Test Report: Docker_Linux_crio_arm64 21139

c4345f2baa4ca80c4898fac9368be2207cfcb3f0:2025-11-09:42265

Failed tests (44/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.41
35 TestAddons/parallel/Registry 15.03
36 TestAddons/parallel/RegistryCreds 0.54
37 TestAddons/parallel/Ingress 143.7
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.41
41 TestAddons/parallel/CSI 45.25
42 TestAddons/parallel/Headlamp 3.65
43 TestAddons/parallel/CloudSpanner 5.42
44 TestAddons/parallel/LocalPath 8.41
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.32
97 TestFunctional/parallel/ServiceCmdConnect 603.53
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.85
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
135 TestFunctional/parallel/ServiceCmd/Format 0.54
136 TestFunctional/parallel/ServiceCmd/URL 0.51
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.26
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.55
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.28
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 516.75
174 TestMultiControlPlane/serial/DeleteSecondaryNode 18.8
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.21
176 TestMultiControlPlane/serial/StopCluster 2.92
177 TestMultiControlPlane/serial/RestartCluster 109.43
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 3.9
179 TestMultiControlPlane/serial/AddSecondaryNode 90.39
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.6
191 TestJSONOutput/pause/Command 2.15
197 TestJSONOutput/unpause/Command 1.74
282 TestPause/serial/Pause 8.06
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.57
304 TestStartStop/group/old-k8s-version/serial/Pause 6.76
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.97
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.95
324 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.69
326 TestStartStop/group/embed-certs/serial/Pause 8.05
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.23
339 TestStartStop/group/newest-cni/serial/Pause 6.46
342 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.09
351 TestStartStop/group/no-preload/serial/Pause 6.98
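
Many of the TestAddons failures below share one error chain: the "addons disable" command exits with status 11 (MK_ADDON_DISABLE_PAUSED) because the paused-state check runs "sudo runc list -f json" on the crio node and that command fails with "open /run/runc: no such file or directory". To re-check a single test locally, a minimal sketch, assuming the test/integration layout implied by the addons_test.go and helpers_test.go paths in the logs; the driver, container-runtime, and binary flags CI passes are not shown in this report and are omitted here:

    # Hypothetical local re-run of one failed test from the minikube repo root.
    # The test name comes from the table above; add the CI flags for your setup.
    go test ./test/integration -run 'TestAddons/parallel/Registry' -v -timeout 60m
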
TestAddons/serial/Volcano (0.41s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable volcano --alsologtostderr -v=1: exit status 11 (408.114819ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1109 13:32:05.623691   10711 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:05.625411   10711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:05.625426   10711 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:05.625433   10711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:05.625740   10711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:05.626055   10711 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:05.626495   10711 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:05.626517   10711 addons.go:607] checking whether the cluster is paused
	I1109 13:32:05.626658   10711 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:05.626676   10711 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:05.627177   10711 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:05.662246   10711 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:05.662303   10711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:05.679599   10711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:05.786469   10711 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:05.786555   10711 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:05.817477   10711 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:05.817502   10711 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:05.817507   10711 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:05.817511   10711 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:05.817514   10711 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:05.817517   10711 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:05.817520   10711 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:05.817523   10711 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:05.817526   10711 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:05.817532   10711 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:05.817535   10711 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:05.817539   10711 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:05.817542   10711 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:05.817545   10711 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:05.817548   10711 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:05.817553   10711 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:05.817561   10711 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:05.817565   10711 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:05.817567   10711 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:05.817570   10711 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:05.817574   10711 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:05.817578   10711 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:05.817581   10711 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:05.817585   10711 cri.go:89] found id: ""
	I1109 13:32:05.817639   10711 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:05.833352   10711 out.go:203] 
	W1109 13:32:05.836349   10711 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:05.836373   10711 out.go:285] * 
	* 
	W1109 13:32:05.946175   10711 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:05.949116   10711 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.41s)
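
The exit status 11 above comes from the paused-state check, not from the volcano addon itself: addons.go checks whether the cluster is paused, cri.go lists kube-system containers through crictl (which succeeds), and the follow-up "sudo runc list -f json" fails because /run/runc does not exist on this crio node. A minimal reproduction sketch using only commands that appear verbatim in the log above; the profile name and binary path are specific to this run:

    # Succeeds in the log: prints container IDs for the kube-system namespace.
    out/minikube-linux-arm64 -p addons-651467 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"

    # Fails in the log with: open /run/runc: no such file or directory
    # This is the command behind each MK_ADDON_DISABLE_PAUSED exit in this report.
    out/minikube-linux-arm64 -p addons-651467 ssh "sudo runc list -f json"
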

TestAddons/parallel/Registry (15.03s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.979523ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003653865s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00391632s
addons_test.go:392: (dbg) Run:  kubectl --context addons-651467 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-651467 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-651467 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.45877731s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 ip
2025/11/09 13:32:32 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable registry --alsologtostderr -v=1: exit status 11 (288.742348ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1109 13:32:32.073847   11263 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:32.074062   11263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:32.074096   11263 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:32.074117   11263 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:32.074437   11263 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:32.074745   11263 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:32.075175   11263 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:32.075219   11263 addons.go:607] checking whether the cluster is paused
	I1109 13:32:32.075356   11263 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:32.075398   11263 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:32.075934   11263 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:32.093069   11263 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:32.093126   11263 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:32.111370   11263 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:32.226641   11263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:32.226735   11263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:32.273814   11263 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:32.273843   11263 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:32.273849   11263 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:32.273853   11263 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:32.273857   11263 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:32.273861   11263 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:32.273868   11263 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:32.273871   11263 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:32.273875   11263 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:32.273881   11263 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:32.273885   11263 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:32.273889   11263 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:32.273892   11263 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:32.273895   11263 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:32.273898   11263 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:32.273903   11263 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:32.273912   11263 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:32.273916   11263 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:32.273919   11263 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:32.273922   11263 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:32.273927   11263 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:32.273930   11263 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:32.273933   11263 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:32.273936   11263 cri.go:89] found id: ""
	I1109 13:32:32.273990   11263 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:32.296765   11263 out.go:203] 
	W1109 13:32:32.299939   11263 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:32.300007   11263 out.go:285] * 
	* 
	W1109 13:32:32.303932   11263 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:32.306949   11263 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.03s)

TestAddons/parallel/RegistryCreds (0.54s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.872402ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-651467
addons_test.go:332: (dbg) Run:  kubectl --context addons-651467 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (305.728166ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1109 13:33:23.295437   13280 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:33:23.295815   13280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:23.295827   13280 out.go:374] Setting ErrFile to fd 2...
	I1109 13:33:23.295832   13280 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:23.296202   13280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:33:23.296495   13280 mustload.go:66] Loading cluster: addons-651467
	I1109 13:33:23.296867   13280 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:23.296878   13280 addons.go:607] checking whether the cluster is paused
	I1109 13:33:23.296985   13280 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:23.296995   13280 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:33:23.297481   13280 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:33:23.327030   13280 ssh_runner.go:195] Run: systemctl --version
	I1109 13:33:23.327084   13280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:33:23.354890   13280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:33:23.466474   13280 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:33:23.466563   13280 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:33:23.495167   13280 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:33:23.495192   13280 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:33:23.495197   13280 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:33:23.495201   13280 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:33:23.495205   13280 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:33:23.495209   13280 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:33:23.495212   13280 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:33:23.495215   13280 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:33:23.495219   13280 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:33:23.495225   13280 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:33:23.495228   13280 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:33:23.495231   13280 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:33:23.495235   13280 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:33:23.495238   13280 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:33:23.495241   13280 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:33:23.495247   13280 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:33:23.495254   13280 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:33:23.495258   13280 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:33:23.495261   13280 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:33:23.495264   13280 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:33:23.495269   13280 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:33:23.495272   13280 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:33:23.495275   13280 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:33:23.495278   13280 cri.go:89] found id: ""
	I1109 13:33:23.495334   13280 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:33:23.511397   13280 out.go:203] 
	W1109 13:33:23.514332   13280 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:33:23.514358   13280 out.go:285] * 
	* 
	W1109 13:33:23.518208   13280 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:33:23.521114   13280 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.54s)

TestAddons/parallel/Ingress (143.7s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-651467 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-651467 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-651467 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [91becde9-9755-466c-b8d8-35dca23a4753] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [91becde9-9755-466c-b8d8-35dca23a4753] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004031073s
I1109 13:33:01.291475    4116 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.681974693s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-651467 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
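
The failing step in this test is the in-node curl above: the ssh helper returned the remote command's exit status 28, curl's operation-timed-out code, after about 2m10s; the ingress-dns steps (replace, ip, nslookup) still ran afterwards. A hedged diagnostic sketch for this situation, assuming the kubectl context and profile shown in the log; ingress-nginx is the namespace the test waits on, and --max-time is added here so a hung request fails quickly:

    # Is the controller the test waited for still Ready, and where is it scheduled?
    kubectl --context addons-651467 -n ingress-nginx get pods -o wide

    # Retry the exact in-node request with an explicit curl timeout.
    out/minikube-linux-arm64 -p addons-651467 ssh "curl -s --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
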
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-651467
helpers_test.go:243: (dbg) docker inspect addons-651467:

-- stdout --
	[
	    {
	        "Id": "c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6",
	        "Created": "2025-11-09T13:29:50.726053015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:29:50.79153505Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/hosts",
	        "LogPath": "/var/lib/docker/containers/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6-json.log",
	        "Name": "/addons-651467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-651467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-651467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6",
	                "LowerDir": "/var/lib/docker/overlay2/5947e5537d85292bcbeafa0ddc99193912a0755c4189834bd896c1d94caf2b0e-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5947e5537d85292bcbeafa0ddc99193912a0755c4189834bd896c1d94caf2b0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5947e5537d85292bcbeafa0ddc99193912a0755c4189834bd896c1d94caf2b0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5947e5537d85292bcbeafa0ddc99193912a0755c4189834bd896c1d94caf2b0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-651467",
	                "Source": "/var/lib/docker/volumes/addons-651467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-651467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-651467",
	                "name.minikube.sigs.k8s.io": "addons-651467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fc3e6ee83c5f9ad2fb1e922de106dfc451222b9bf113c3d269984e224ee5d34",
	            "SandboxKey": "/var/run/docker/netns/8fc3e6ee83c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-651467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:ed:4f:61:ce:fd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b7d1097bb55325287ea53a686e1ae72c1a0bec65934ce7e004057f3409631782",
	                    "EndpointID": "0a24fafb4e1067e439d55d52bab317d3e78226374a2f22ddb0a2fcd7482e5919",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-651467",
	                        "c4ab4837e17b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-651467 -n addons-651467
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-651467 logs -n 25: (1.501277002s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-143180                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-143180 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ --download-only -p binary-mirror-258515 --alsologtostderr --binary-mirror http://127.0.0.1:41697 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-258515   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ -p binary-mirror-258515                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-258515   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ addons  │ enable dashboard -p addons-651467                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-651467                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ start   │ -p addons-651467 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-651467 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ ip      │ addons-651467 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-651467 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ ssh     │ addons-651467 ssh cat /opt/local-path-provisioner/pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-651467 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ enable headlamp -p addons-651467 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ ssh     │ addons-651467 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:33 UTC │                     │
	│ addons  │ addons-651467 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:33 UTC │                     │
	│ addons  │ addons-651467 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:33 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-651467                                                                                                                                                                                                                                                                                                                                                                                           │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:33 UTC │ 09 Nov 25 13:33 UTC │
	│ addons  │ addons-651467 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:33 UTC │                     │
	│ ip      │ addons-651467 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:35 UTC │ 09 Nov 25 13:35 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
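For reference, the audit entries above can be replayed by hand against the same profile. A minimal sketch, assuming the out/minikube-linux-arm64 binary and the addons-651467 profile from this run (the exact flag order of the original invocations may differ):

  out/minikube-linux-arm64 -p addons-651467 addons disable headlamp --alsologtostderr -v=1
  out/minikube-linux-arm64 -p addons-651467 ssh "cat /opt/local-path-provisioner/pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d_default_test-pvc/file1"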
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:24.202235    4875 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:29:24.202459    4875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:24.202485    4875 out.go:374] Setting ErrFile to fd 2...
	I1109 13:29:24.202504    4875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:24.202801    4875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:29:24.203283    4875 out.go:368] Setting JSON to false
	I1109 13:29:24.204130    4875 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":715,"bootTime":1762694250,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:29:24.204224    4875 start.go:143] virtualization:  
	I1109 13:29:24.207649    4875 out.go:179] * [addons-651467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:29:24.211280    4875 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:29:24.211345    4875 notify.go:221] Checking for updates...
	I1109 13:29:24.217219    4875 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:29:24.220225    4875 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:29:24.223051    4875 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:29:24.225889    4875 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:29:24.228736    4875 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:29:24.231707    4875 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:24.255327    4875 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:29:24.255457    4875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:24.317017    4875 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-09 13:29:24.308126112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:29:24.317134    4875 docker.go:319] overlay module found
	I1109 13:29:24.320198    4875 out.go:179] * Using the docker driver based on user configuration
	I1109 13:29:24.323029    4875 start.go:309] selected driver: docker
	I1109 13:29:24.323052    4875 start.go:930] validating driver "docker" against <nil>
	I1109 13:29:24.323066    4875 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:29:24.324072    4875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:24.379970    4875 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-09 13:29:24.370892342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:29:24.380119    4875 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:24.380362    4875 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:29:24.383236    4875 out.go:179] * Using Docker driver with root privileges
	I1109 13:29:24.386124    4875 cni.go:84] Creating CNI manager for ""
	I1109 13:29:24.386189    4875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:24.386203    4875 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:24.386283    4875 start.go:353] cluster config:
	{Name:addons-651467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1109 13:29:24.391120    4875 out.go:179] * Starting "addons-651467" primary control-plane node in "addons-651467" cluster
	I1109 13:29:24.393901    4875 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:29:24.396792    4875 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:29:24.399528    4875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:24.399584    4875 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 13:29:24.399597    4875 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:24.399599    4875 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:29:24.399691    4875 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:29:24.399702    4875 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:29:24.400107    4875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/config.json ...
	I1109 13:29:24.400133    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/config.json: {Name:mk129c827ff3469375a4a6ce55f7b60ccdf45bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:24.415274    4875 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:29:24.415386    4875 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1109 13:29:24.415404    4875 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1109 13:29:24.415408    4875 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1109 13:29:24.415415    4875 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1109 13:29:24.415420    4875 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1109 13:29:42.076221    4875 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1109 13:29:42.076259    4875 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:29:42.076291    4875 start.go:360] acquireMachinesLock for addons-651467: {Name:mk4994005e3898dce07874204da9a6684eba48a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:29:42.076421    4875 start.go:364] duration metric: took 110.607µs to acquireMachinesLock for "addons-651467"
	I1109 13:29:42.076448    4875 start.go:93] Provisioning new machine with config: &{Name:addons-651467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:42.076537    4875 start.go:125] createHost starting for "" (driver="docker")
	I1109 13:29:42.080257    4875 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1109 13:29:42.080540    4875 start.go:159] libmachine.API.Create for "addons-651467" (driver="docker")
	I1109 13:29:42.080581    4875 client.go:173] LocalClient.Create starting
	I1109 13:29:42.080711    4875 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 13:29:42.349459    4875 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 13:29:43.828044    4875 cli_runner.go:164] Run: docker network inspect addons-651467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 13:29:43.843910    4875 cli_runner.go:211] docker network inspect addons-651467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 13:29:43.844017    4875 network_create.go:284] running [docker network inspect addons-651467] to gather additional debugging logs...
	I1109 13:29:43.844039    4875 cli_runner.go:164] Run: docker network inspect addons-651467
	W1109 13:29:43.859512    4875 cli_runner.go:211] docker network inspect addons-651467 returned with exit code 1
	I1109 13:29:43.859541    4875 network_create.go:287] error running [docker network inspect addons-651467]: docker network inspect addons-651467: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-651467 not found
	I1109 13:29:43.859560    4875 network_create.go:289] output of [docker network inspect addons-651467]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-651467 not found
	
	** /stderr **
	I1109 13:29:43.859658    4875 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:29:43.875579    4875 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191df20}
	I1109 13:29:43.875617    4875 network_create.go:124] attempt to create docker network addons-651467 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 13:29:43.875681    4875 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-651467 addons-651467
	I1109 13:29:43.931552    4875 network_create.go:108] docker network addons-651467 192.168.49.0/24 created
	I1109 13:29:43.931592    4875 kic.go:121] calculated static IP "192.168.49.2" for the "addons-651467" container
	I1109 13:29:43.931683    4875 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 13:29:43.947447    4875 cli_runner.go:164] Run: docker volume create addons-651467 --label name.minikube.sigs.k8s.io=addons-651467 --label created_by.minikube.sigs.k8s.io=true
	I1109 13:29:43.964764    4875 oci.go:103] Successfully created a docker volume addons-651467
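The network and volume created in the two steps above can be checked directly with the docker CLI. A minimal sketch using the same resource names; the --format templates here are standard Docker ones, not taken from this log:

  docker network inspect addons-651467 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
  docker volume inspect addons-651467 --format '{{.Mountpoint}}'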
	I1109 13:29:43.964859    4875 cli_runner.go:164] Run: docker run --rm --name addons-651467-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-651467 --entrypoint /usr/bin/test -v addons-651467:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 13:29:46.205341    4875 cli_runner.go:217] Completed: docker run --rm --name addons-651467-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-651467 --entrypoint /usr/bin/test -v addons-651467:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (2.240444954s)
	I1109 13:29:46.205374    4875 oci.go:107] Successfully prepared a docker volume addons-651467
	I1109 13:29:46.205432    4875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:46.205447    4875 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 13:29:46.205523    4875 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-651467:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 13:29:50.651251    4875 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-651467:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445680047s)
	I1109 13:29:50.651293    4875 kic.go:203] duration metric: took 4.445833823s to extract preloaded images to volume ...
	W1109 13:29:50.651431    4875 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 13:29:50.651568    4875 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 13:29:50.708181    4875 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-651467 --name addons-651467 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-651467 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-651467 --network addons-651467 --ip 192.168.49.2 --volume addons-651467:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 13:29:51.044906    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Running}}
	I1109 13:29:51.073396    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:29:51.100150    4875 cli_runner.go:164] Run: docker exec addons-651467 stat /var/lib/dpkg/alternatives/iptables
	I1109 13:29:51.154363    4875 oci.go:144] the created container "addons-651467" has a running status.
	I1109 13:29:51.154398    4875 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa...
	I1109 13:29:51.740335    4875 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 13:29:51.759276    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:29:51.776747    4875 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 13:29:51.776770    4875 kic_runner.go:114] Args: [docker exec --privileged addons-651467 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 13:29:51.817318    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
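The inspect template used repeatedly in the provisioning lines below extracts the host port that Docker mapped to the container's 22/tcp (SSH) port. An equivalent ad-hoc check, as a sketch against the same container:

  docker container inspect addons-651467 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
  docker port addons-651467 22/tcp   # same information in a shorter form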
	I1109 13:29:51.834283    4875 machine.go:94] provisionDockerMachine start ...
	I1109 13:29:51.834367    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:51.853856    4875 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:51.854243    4875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:51.854255    4875 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:29:51.854880    4875 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49612->127.0.0.1:32768: read: connection reset by peer
	I1109 13:29:55.003519    4875 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-651467
	
	I1109 13:29:55.003541    4875 ubuntu.go:182] provisioning hostname "addons-651467"
	I1109 13:29:55.003604    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:55.025852    4875 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:55.026190    4875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:55.026209    4875 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-651467 && echo "addons-651467" | sudo tee /etc/hostname
	I1109 13:29:55.185349    4875 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-651467
	
	I1109 13:29:55.185430    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:55.203980    4875 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:55.204296    4875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:55.204312    4875 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-651467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-651467/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-651467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:29:55.356069    4875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
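The shell fragment above only rewrites the 127.0.1.1 entry when the new hostname is not yet present in /etc/hosts. Whether it took effect can be confirmed from the host; a sketch, assuming the profile from this run:

  out/minikube-linux-arm64 -p addons-651467 ssh "grep addons-651467 /etc/hosts"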
	I1109 13:29:55.356094    4875 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:29:55.356119    4875 ubuntu.go:190] setting up certificates
	I1109 13:29:55.356129    4875 provision.go:84] configureAuth start
	I1109 13:29:55.356212    4875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-651467
	I1109 13:29:55.373623    4875 provision.go:143] copyHostCerts
	I1109 13:29:55.373705    4875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:29:55.373830    4875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:29:55.373906    4875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:29:55.373994    4875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.addons-651467 san=[127.0.0.1 192.168.49.2 addons-651467 localhost minikube]
	I1109 13:29:55.579769    4875 provision.go:177] copyRemoteCerts
	I1109 13:29:55.579841    4875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:29:55.579917    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:55.598804    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:55.705541    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:29:55.723795    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:29:55.740609    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:29:55.757376    4875 provision.go:87] duration metric: took 401.226503ms to configureAuth
	I1109 13:29:55.757400    4875 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:29:55.757583    4875 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:55.757681    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:55.774945    4875 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:55.775248    4875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:55.775262    4875 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:29:56.038560    4875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:29:56.038580    4875 machine.go:97] duration metric: took 4.204279672s to provisionDockerMachine
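The command above writes the insecure-registry option for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O. Reading the file back confirms the setting; a sketch, assuming the same profile:

  out/minikube-linux-arm64 -p addons-651467 ssh "cat /etc/sysconfig/crio.minikube"
  # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '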
	I1109 13:29:56.038589    4875 client.go:176] duration metric: took 13.957998677s to LocalClient.Create
	I1109 13:29:56.038605    4875 start.go:167] duration metric: took 13.95806829s to libmachine.API.Create "addons-651467"
	I1109 13:29:56.038612    4875 start.go:293] postStartSetup for "addons-651467" (driver="docker")
	I1109 13:29:56.038622    4875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:29:56.038686    4875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:29:56.038734    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:56.056548    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:56.164335    4875 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:29:56.168018    4875 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:29:56.168046    4875 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:29:56.168058    4875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:29:56.168128    4875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:29:56.168157    4875 start.go:296] duration metric: took 129.536845ms for postStartSetup
	I1109 13:29:56.168473    4875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-651467
	I1109 13:29:56.185830    4875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/config.json ...
	I1109 13:29:56.186138    4875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:29:56.186202    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:56.203115    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:56.304930    4875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:29:56.309497    4875 start.go:128] duration metric: took 14.232944846s to createHost
	I1109 13:29:56.309571    4875 start.go:83] releasing machines lock for "addons-651467", held for 14.233140272s
	I1109 13:29:56.309677    4875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-651467
	I1109 13:29:56.326564    4875 ssh_runner.go:195] Run: cat /version.json
	I1109 13:29:56.326613    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:56.326880    4875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:29:56.326932    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:56.346663    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:56.355951    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:56.447280    4875 ssh_runner.go:195] Run: systemctl --version
	I1109 13:29:56.536725    4875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:29:56.571436    4875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:29:56.575608    4875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:29:56.575717    4875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:29:56.604790    4875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 13:29:56.604812    4875 start.go:496] detecting cgroup driver to use...
	I1109 13:29:56.604843    4875 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:29:56.604896    4875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:29:56.621254    4875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:29:56.636193    4875 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:29:56.636260    4875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:29:56.653788    4875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:29:56.672984    4875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:29:56.789645    4875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:29:56.917453    4875 docker.go:234] disabling docker service ...
	I1109 13:29:56.917558    4875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:29:56.938056    4875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:29:56.950556    4875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:29:57.065685    4875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:29:57.184021    4875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:29:57.196471    4875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:29:57.210135    4875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:29:57.210281    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.219145    4875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:29:57.219223    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.227597    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.236325    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.244944    4875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:29:57.252468    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.260922    4875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.274219    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.282600    4875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:29:57.290068    4875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1109 13:29:57.290160    4875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1109 13:29:57.303473    4875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:29:57.311368    4875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:57.422450    4875 ssh_runner.go:195] Run: sudo systemctl restart crio
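The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports before the service is restarted. The resulting keys can be spot-checked inside the node; a sketch (run via minikube ssh for this profile):

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
  # expected, assembled from the edits above:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   "net.ipv4.ip_unprivileged_port_start=0",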
	I1109 13:29:57.554184    4875 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:29:57.554317    4875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:29:57.557853    4875 start.go:564] Will wait 60s for crictl version
	I1109 13:29:57.557954    4875 ssh_runner.go:195] Run: which crictl
	I1109 13:29:57.561195    4875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:29:57.584884    4875 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:29:57.585077    4875 ssh_runner.go:195] Run: crio --version
	I1109 13:29:57.613520    4875 ssh_runner.go:195] Run: crio --version
	I1109 13:29:57.645604    4875 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:29:57.648482    4875 cli_runner.go:164] Run: docker network inspect addons-651467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:29:57.667667    4875 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:29:57.671356    4875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:57.680593    4875 kubeadm.go:884] updating cluster {Name:addons-651467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:29:57.680706    4875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:57.680762    4875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:57.712673    4875 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:57.712696    4875 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:29:57.712751    4875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:57.737639    4875 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:57.737662    4875 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:29:57.737670    4875 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 13:29:57.737753    4875 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-651467 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:29:57.737833    4875 ssh_runner.go:195] Run: crio config
	I1109 13:29:57.816421    4875 cni.go:84] Creating CNI manager for ""
	I1109 13:29:57.816446    4875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:57.816464    4875 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:29:57.816507    4875 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-651467 NodeName:addons-651467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:29:57.816671    4875 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-651467"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:29:57.816746    4875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:29:57.824517    4875 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:29:57.824609    4875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:29:57.832292    4875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 13:29:57.846595    4875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:29:57.862370    4875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
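The rendered kubeadm config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. Applying such a config by hand boils down to a kubeadm init against that file; a sketch only, assuming kubeadm sits alongside kubelet under /var/lib/minikube/binaries/v1.34.1 (minikube's real invocation adds further flags and pre-flight handling):

  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml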
	I1109 13:29:57.875465    4875 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 13:29:57.878954    4875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:57.889111    4875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:57.995831    4875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:58.012414    4875 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467 for IP: 192.168.49.2
	I1109 13:29:58.012437    4875 certs.go:195] generating shared ca certs ...
	I1109 13:29:58.012455    4875 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:58.012613    4875 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:29:58.815426    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt ...
	I1109 13:29:58.815498    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt: {Name:mkb86fe4580308a5adcf0264e830fede14e8cc36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:58.815701    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key ...
	I1109 13:29:58.815734    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key: {Name:mk48c7d5dd368e917e8673396d91313ce1411346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:58.815853    4875 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:29:59.081454    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt ...
	I1109 13:29:59.081485    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt: {Name:mka77811779f028cd2c29c0788f4fc57f7399a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.081695    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key ...
	I1109 13:29:59.081711    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key: {Name:mkcfe7fdba38edb59535214ca3c34887341dad32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.081823    4875 certs.go:257] generating profile certs ...
	I1109 13:29:59.081882    4875 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.key
	I1109 13:29:59.081899    4875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt with IP's: []
	I1109 13:29:59.554604    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt ...
	I1109 13:29:59.554635    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: {Name:mkb5129912da0330cf5f2087feea056b4c3687ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.554805    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.key ...
	I1109 13:29:59.554819    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.key: {Name:mka7681018893601dd5ee47377e7b97dba042747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.554888    4875 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key.057f78a1
	I1109 13:29:59.554913    4875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt.057f78a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1109 13:29:59.992178    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt.057f78a1 ...
	I1109 13:29:59.992211    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt.057f78a1: {Name:mk5885cdcf4b26af0ab62b466c88c19552f535d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.992388    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key.057f78a1 ...
	I1109 13:29:59.992402    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key.057f78a1: {Name:mk8c34653536b2604d6587e82121e6ae9af6b189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.992486    4875 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt.057f78a1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt
	I1109 13:29:59.992572    4875 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key.057f78a1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key
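The apiserver serving certificate assembled above was requested at 13:29:59.554913 with IP SANs for the cluster service IP 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.49.2. A quick spot-check of the finished certificate from the host that ran the test (assuming the profile path shown in the log) is:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
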
	I1109 13:29:59.992629    4875 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.key
	I1109 13:29:59.992653    4875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.crt with IP's: []
	I1109 13:30:00.886859    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.crt ...
	I1109 13:30:00.886899    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.crt: {Name:mkbc7863fbc3d8a1220aa9fa9ef7020993e849f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:30:00.887129    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.key ...
	I1109 13:30:00.887145    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.key: {Name:mk8d8c3fc1ee7ee0888a2a40426e84bb5152d01e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:30:00.887368    4875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:30:00.887408    4875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:30:00.887439    4875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:30:00.887465    4875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:30:00.888136    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:30:00.913866    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:30:00.937709    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:30:00.962691    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:30:00.984193    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 13:30:01.023662    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:30:01.059173    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:30:01.097562    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:30:01.126278    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:30:01.152939    4875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:30:01.171279    4875 ssh_runner.go:195] Run: openssl version
	I1109 13:30:01.196028    4875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:30:01.206138    4875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:30:01.211661    4875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:30:01.211749    4875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:30:01.258932    4875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
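The /etc/ssl/certs/b5213941.0 symlink created here uses the standard OpenSSL subject-hash naming scheme; the hash itself comes from the `openssl x509 -hash -noout` call two steps earlier. The same value can be reproduced against the CA written earlier in this run, and should print b5213941:

	openssl x509 -hash -noout -in /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt
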
	I1109 13:30:01.293464    4875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:30:01.300183    4875 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 13:30:01.300235    4875 kubeadm.go:401] StartCluster: {Name:addons-651467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:30:01.300309    4875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:30:01.300395    4875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:30:01.336003    4875 cri.go:89] found id: ""
	I1109 13:30:01.336093    4875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:30:01.348102    4875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:30:01.357950    4875 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 13:30:01.358109    4875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:30:01.368760    4875 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 13:30:01.368816    4875 kubeadm.go:158] found existing configuration files:
	
	I1109 13:30:01.368879    4875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 13:30:01.378630    4875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 13:30:01.378704    4875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 13:30:01.388686    4875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 13:30:01.404517    4875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 13:30:01.404597    4875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 13:30:01.413481    4875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 13:30:01.422962    4875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 13:30:01.423102    4875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:30:01.432121    4875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 13:30:01.442346    4875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 13:30:01.442480    4875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 13:30:01.453002    4875 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 13:30:01.501853    4875 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 13:30:01.502359    4875 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 13:30:01.529028    4875 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 13:30:01.529152    4875 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 13:30:01.529205    4875 kubeadm.go:319] OS: Linux
	I1109 13:30:01.529278    4875 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 13:30:01.529347    4875 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 13:30:01.529421    4875 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 13:30:01.529488    4875 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 13:30:01.529563    4875 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 13:30:01.529633    4875 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 13:30:01.529706    4875 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 13:30:01.529785    4875 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 13:30:01.529848    4875 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 13:30:01.610645    4875 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 13:30:01.610801    4875 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 13:30:01.610956    4875 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 13:30:01.621669    4875 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 13:30:01.628275    4875 out.go:252]   - Generating certificates and keys ...
	I1109 13:30:01.628377    4875 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 13:30:01.628453    4875 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 13:30:02.193390    4875 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 13:30:02.336480    4875 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 13:30:02.612356    4875 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 13:30:02.952014    4875 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 13:30:03.295042    4875 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 13:30:03.295265    4875 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-651467 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 13:30:03.830724    4875 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 13:30:03.831039    4875 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-651467 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 13:30:04.470763    4875 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 13:30:05.489721    4875 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 13:30:05.944002    4875 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 13:30:05.944329    4875 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 13:30:06.887973    4875 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 13:30:07.067707    4875 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 13:30:07.706053    4875 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 13:30:09.346813    4875 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 13:30:10.361632    4875 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 13:30:10.362229    4875 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 13:30:10.364909    4875 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 13:30:10.368450    4875 out.go:252]   - Booting up control plane ...
	I1109 13:30:10.368585    4875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 13:30:10.368689    4875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 13:30:10.368780    4875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 13:30:10.386625    4875 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 13:30:10.386934    4875 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 13:30:10.395546    4875 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 13:30:10.395649    4875 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 13:30:10.395692    4875 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 13:30:10.523037    4875 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 13:30:10.523158    4875 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 13:30:12.024670    4875 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501969248s
	I1109 13:30:12.028493    4875 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 13:30:12.028601    4875 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1109 13:30:12.028695    4875 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 13:30:12.028777    4875 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 13:30:14.900297    4875 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.871353735s
	I1109 13:30:16.289745    4875 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.260633178s
	I1109 13:30:18.032280    4875 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003737833s
	I1109 13:30:18.052926    4875 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 13:30:18.068472    4875 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 13:30:18.084175    4875 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 13:30:18.084413    4875 kubeadm.go:319] [mark-control-plane] Marking the node addons-651467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 13:30:18.100750    4875 kubeadm.go:319] [bootstrap-token] Using token: 3icf2d.4lu3e6i9hke2tnsi
	I1109 13:30:18.103926    4875 out.go:252]   - Configuring RBAC rules ...
	I1109 13:30:18.104060    4875 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 13:30:18.112578    4875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 13:30:18.120857    4875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 13:30:18.124982    4875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 13:30:18.131351    4875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 13:30:18.135986    4875 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 13:30:18.440141    4875 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 13:30:18.880050    4875 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 13:30:19.442022    4875 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 13:30:19.443323    4875 kubeadm.go:319] 
	I1109 13:30:19.443399    4875 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 13:30:19.443405    4875 kubeadm.go:319] 
	I1109 13:30:19.443482    4875 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 13:30:19.443487    4875 kubeadm.go:319] 
	I1109 13:30:19.443513    4875 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 13:30:19.443572    4875 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 13:30:19.443621    4875 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 13:30:19.443626    4875 kubeadm.go:319] 
	I1109 13:30:19.443679    4875 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 13:30:19.443683    4875 kubeadm.go:319] 
	I1109 13:30:19.443731    4875 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 13:30:19.443736    4875 kubeadm.go:319] 
	I1109 13:30:19.443788    4875 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 13:30:19.443863    4875 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 13:30:19.443961    4875 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 13:30:19.443966    4875 kubeadm.go:319] 
	I1109 13:30:19.444050    4875 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 13:30:19.444127    4875 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 13:30:19.444131    4875 kubeadm.go:319] 
	I1109 13:30:19.444216    4875 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3icf2d.4lu3e6i9hke2tnsi \
	I1109 13:30:19.444326    4875 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 13:30:19.444348    4875 kubeadm.go:319] 	--control-plane 
	I1109 13:30:19.444353    4875 kubeadm.go:319] 
	I1109 13:30:19.444438    4875 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 13:30:19.444443    4875 kubeadm.go:319] 
	I1109 13:30:19.444525    4875 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3icf2d.4lu3e6i9hke2tnsi \
	I1109 13:30:19.444640    4875 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 13:30:19.448166    4875 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 13:30:19.448400    4875 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 13:30:19.448510    4875 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
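The join commands kubeadm prints above embed the bootstrap token 3icf2d.4lu3e6i9hke2tnsi, which by default expires after 24 hours. This run never joins additional nodes, but the stock kubeadm way to mint a fresh join command later, run as root on the control-plane node, would be:

	kubeadm token create --print-join-command
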
	I1109 13:30:19.448525    4875 cni.go:84] Creating CNI manager for ""
	I1109 13:30:19.448536    4875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:30:19.451788    4875 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 13:30:19.454751    4875 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 13:30:19.459011    4875 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 13:30:19.459036    4875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 13:30:19.472608    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
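With the docker driver and the crio runtime, minikube selects kindnet as the CNI (cni.go:143 above) and applies its manifest with the bundled kubectl. A follow-up check on the rollout, assuming the app=kindnet label that minikube's kindnet manifest carries and the kubeconfig context written for this profile a few lines further down, would look like:

	kubectl --context addons-651467 -n kube-system get pods -l app=kindnet -o wide
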
	I1109 13:30:19.770099    4875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:30:19.770236    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:19.770309    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-651467 minikube.k8s.io/updated_at=2025_11_09T13_30_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=addons-651467 minikube.k8s.io/primary=true
	I1109 13:30:19.974539    4875 ops.go:34] apiserver oom_adj: -16
	I1109 13:30:19.974663    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:20.475561    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:20.975503    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:21.475093    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:21.975488    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:22.475474    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:22.974746    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:23.079934    4875 kubeadm.go:1114] duration metric: took 3.309742117s to wait for elevateKubeSystemPrivileges
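The repeated `kubectl get sa default` calls above are a poll rather than a failure: minikube waits for the ServiceAccount controller to create the default account before it treats the minikube-rbac binding created at 13:30:19 as usable, and here that wait took about 3.3 seconds. The equivalent manual check, using the kubeconfig context this profile writes, is simply:

	kubectl --context addons-651467 -n default get serviceaccount default
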
	I1109 13:30:23.079967    4875 kubeadm.go:403] duration metric: took 21.779734371s to StartCluster
	I1109 13:30:23.079984    4875 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:30:23.080101    4875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:30:23.080460    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:30:23.080640    4875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 13:30:23.080663    4875 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:30:23.080909    4875 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:30:23.080940    4875 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
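This toEnable map is what drives the per-addon setup goroutines that follow, which is why the addons.go and cli_runner.go lines below interleave out of timestamp order. Once the cluster is up, the same enabled/disabled state can be listed from the CLI (profile name taken from this run):

	minikube addons list -p addons-651467
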
	I1109 13:30:23.081018    4875 addons.go:70] Setting yakd=true in profile "addons-651467"
	I1109 13:30:23.081031    4875 addons.go:239] Setting addon yakd=true in "addons-651467"
	I1109 13:30:23.081053    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.081498    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.082050    4875 addons.go:70] Setting metrics-server=true in profile "addons-651467"
	I1109 13:30:23.082069    4875 addons.go:239] Setting addon metrics-server=true in "addons-651467"
	I1109 13:30:23.082084    4875 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-651467"
	I1109 13:30:23.082096    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.082102    4875 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-651467"
	I1109 13:30:23.082125    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.082498    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.082626    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.085442    4875 addons.go:70] Setting registry=true in profile "addons-651467"
	I1109 13:30:23.085536    4875 addons.go:239] Setting addon registry=true in "addons-651467"
	I1109 13:30:23.085638    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.085768    4875 addons.go:70] Setting registry-creds=true in profile "addons-651467"
	I1109 13:30:23.089577    4875 addons.go:239] Setting addon registry-creds=true in "addons-651467"
	I1109 13:30:23.089643    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.090281    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.092461    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.085777    4875 addons.go:70] Setting storage-provisioner=true in profile "addons-651467"
	I1109 13:30:23.104005    4875 addons.go:239] Setting addon storage-provisioner=true in "addons-651467"
	I1109 13:30:23.104052    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.104526    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.085781    4875 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-651467"
	I1109 13:30:23.105854    4875 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-651467"
	I1109 13:30:23.106167    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.085785    4875 addons.go:70] Setting volcano=true in profile "addons-651467"
	I1109 13:30:23.126496    4875 addons.go:239] Setting addon volcano=true in "addons-651467"
	I1109 13:30:23.126563    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.127087    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.086487    4875 addons.go:70] Setting volumesnapshots=true in profile "addons-651467"
	I1109 13:30:23.136566    4875 addons.go:239] Setting addon volumesnapshots=true in "addons-651467"
	I1109 13:30:23.136605    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.137067    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.086503    4875 out.go:179] * Verifying Kubernetes components...
	I1109 13:30:23.086871    4875 addons.go:70] Setting gcp-auth=true in profile "addons-651467"
	I1109 13:30:23.159797    4875 mustload.go:66] Loading cluster: addons-651467
	I1109 13:30:23.163928    4875 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:30:23.165277    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.086879    4875 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-651467"
	I1109 13:30:23.086883    4875 addons.go:70] Setting cloud-spanner=true in profile "addons-651467"
	I1109 13:30:23.086887    4875 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-651467"
	I1109 13:30:23.184123    4875 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-651467"
	I1109 13:30:23.184178    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.184200    4875 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-651467"
	I1109 13:30:23.184241    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.184638    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.184713    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.190448    4875 addons.go:239] Setting addon cloud-spanner=true in "addons-651467"
	I1109 13:30:23.191048    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.195516    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.086891    4875 addons.go:70] Setting default-storageclass=true in profile "addons-651467"
	I1109 13:30:23.207484    4875 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-651467"
	I1109 13:30:23.086896    4875 addons.go:70] Setting inspektor-gadget=true in profile "addons-651467"
	I1109 13:30:23.207637    4875 addons.go:239] Setting addon inspektor-gadget=true in "addons-651467"
	I1109 13:30:23.207667    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.086911    4875 addons.go:70] Setting ingress=true in profile "addons-651467"
	I1109 13:30:23.207712    4875 addons.go:239] Setting addon ingress=true in "addons-651467"
	I1109 13:30:23.207738    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.086915    4875 addons.go:70] Setting ingress-dns=true in profile "addons-651467"
	I1109 13:30:23.207787    4875 addons.go:239] Setting addon ingress-dns=true in "addons-651467"
	I1109 13:30:23.207805    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.159708    4875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:30:23.209434    4875 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1109 13:30:23.217579    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.232513    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.246074    4875 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1109 13:30:23.249512    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.249784    4875 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:30:23.249797    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1109 13:30:23.249842    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.264958    4875 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1109 13:30:23.265198    4875 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 13:30:23.265211    4875 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 13:30:23.265286    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.295051    4875 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1109 13:30:23.298478    4875 out.go:179]   - Using image docker.io/registry:3.0.0
	I1109 13:30:23.302026    4875 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1109 13:30:23.302092    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1109 13:30:23.302182    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.333757    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1109 13:30:23.333791    4875 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1109 13:30:23.333863    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.341646    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.361736    4875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1109 13:30:23.362284    4875 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1109 13:30:23.389135    4875 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1109 13:30:23.389156    4875 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 13:30:23.390663    4875 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-651467"
	I1109 13:30:23.390713    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.391143    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.406762    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1109 13:30:23.409633    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1109 13:30:23.409656    4875 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1109 13:30:23.409825    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.410774    4875 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:30:23.410836    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1109 13:30:23.410913    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.462856    4875 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:30:23.462883    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:30:23.462950    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.467554    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.472811    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1109 13:30:23.493988    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1109 13:30:23.496926    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1109 13:30:23.502368    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1109 13:30:23.505237    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1109 13:30:23.508044    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1109 13:30:23.510999    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1109 13:30:23.512391    4875 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1109 13:30:23.522100    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1109 13:30:23.522332    4875 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1109 13:30:23.522528    4875 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:30:23.522542    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1109 13:30:23.522622    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.542469    4875 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1109 13:30:23.548192    4875 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:30:23.548262    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1109 13:30:23.548361    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.564743    4875 addons.go:239] Setting addon default-storageclass=true in "addons-651467"
	I1109 13:30:23.564796    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.565194    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.566307    4875 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1109 13:30:23.569299    4875 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1109 13:30:23.569356    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1109 13:30:23.569470    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.581671    4875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:30:23.581856    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.585076    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.585832    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.586532    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1109 13:30:23.586547    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1109 13:30:23.586610    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.589740    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.590461    4875 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:23.593361    4875 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:23.597196    4875 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:30:23.597217    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1109 13:30:23.597277    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.620812    4875 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1109 13:30:23.623809    4875 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:30:23.623830    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1109 13:30:23.624023    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.636040    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.640117    4875 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1109 13:30:23.646122    4875 out.go:179]   - Using image docker.io/busybox:stable
	I1109 13:30:23.651322    4875 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:30:23.651353    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1109 13:30:23.651424    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.680167    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.736442    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.738071    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.751617    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.775628    4875 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:30:23.775650    4875 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:30:23.775712    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.796115    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.804121    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.811079    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.811834    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	W1109 13:30:23.812821    4875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:30:23.812845    4875 retry.go:31] will retry after 196.572128ms: ssh: handshake failed: EOF
	W1109 13:30:23.813084    4875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:30:23.813095    4875 retry.go:31] will retry after 148.811311ms: ssh: handshake failed: EOF
	I1109 13:30:23.813238    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.844899    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	W1109 13:30:23.846051    4875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:30:23.846070    4875 retry.go:31] will retry after 283.546301ms: ssh: handshake failed: EOF
	W1109 13:30:24.010544    4875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:30:24.010644    4875 retry.go:31] will retry after 451.888093ms: ssh: handshake failed: EOF
	I1109 13:30:24.340742    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:30:24.487955    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:30:24.565629    4875 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1109 13:30:24.565702    4875 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1109 13:30:24.604087    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:30:24.611598    4875 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 13:30:24.611671    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1109 13:30:24.619908    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:30:24.632560    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1109 13:30:24.632637    4875 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1109 13:30:24.654687    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:30:24.662836    4875 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1109 13:30:24.662907    4875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1109 13:30:24.674934    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:30:24.677792    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:30:24.745988    4875 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:30:24.746008    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1109 13:30:24.786792    4875 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.425024806s)
	I1109 13:30:24.786870    4875 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
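The pipeline that just completed (started at 13:30:23.361736) rewrites the coredns ConfigMap in place: it inserts a hosts block that maps 192.168.49.1 to host.minikube.internal with fallthrough ahead of the forward plugin, and adds a log directive before errors. The resulting Corefile can be dumped with the kubeconfig context this profile writes:

	kubectl --context addons-651467 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
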
	I1109 13:30:24.787976    4875 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.206271528s)
	I1109 13:30:24.788821    4875 node_ready.go:35] waiting up to 6m0s for node "addons-651467" to be "Ready" ...
	I1109 13:30:24.814290    4875 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 13:30:24.814362    4875 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 13:30:24.863198    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1109 13:30:24.863267    4875 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1109 13:30:24.898570    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:30:24.940723    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:30:24.944353    4875 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1109 13:30:24.944426    4875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1109 13:30:24.954474    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1109 13:30:24.954546    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1109 13:30:25.104377    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:30:25.121689    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1109 13:30:25.121769    4875 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1109 13:30:25.192933    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:30:25.193010    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1109 13:30:25.213382    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1109 13:30:25.213459    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1109 13:30:25.221748    4875 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:30:25.221844    4875 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 13:30:25.236853    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1109 13:30:25.278536    4875 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1109 13:30:25.278565    4875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1109 13:30:25.290842    4875 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-651467" context rescaled to 1 replicas
	I1109 13:30:25.393225    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:30:25.438693    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:30:25.499932    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1109 13:30:25.499954    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1109 13:30:25.519227    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1109 13:30:25.519249    4875 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1109 13:30:25.671321    4875 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:25.671392    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1109 13:30:25.694095    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1109 13:30:25.694167    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1109 13:30:25.880194    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1109 13:30:25.880272    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1109 13:30:26.046095    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:26.200363    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1109 13:30:26.200435    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1109 13:30:26.272502    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1109 13:30:26.272528    4875 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1109 13:30:26.390827    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1109 13:30:26.390850    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1109 13:30:26.514748    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1109 13:30:26.514820    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1109 13:30:26.705488    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:30:26.705567    4875 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1109 13:30:26.809629    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:26.902084    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:30:28.483316    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.142473295s)
	I1109 13:30:28.483694    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.995667147s)
	I1109 13:30:28.483785    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.879629081s)
	I1109 13:30:28.676100    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.0561068s)
	I1109 13:30:28.676202    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.02144271s)
	I1109 13:30:28.676219    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.00122603s)
	I1109 13:30:28.676365    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.998557251s)
	I1109 13:30:28.676419    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.777770521s)
	I1109 13:30:28.676472    4875 addons.go:480] Verifying addon registry=true in "addons-651467"
	I1109 13:30:28.680909    4875 out.go:179] * Verifying registry addon...
	I1109 13:30:28.684059    4875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1109 13:30:28.699820    4875 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:30:28.699840    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
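	# Illustrative aside: the repeated "waiting for pod" lines below are minikube polling the
	# label selector shown above until the registry pod leaves Pending. An equivalent manual
	# check (assuming the cluster's kubeconfig is in use) would be:
	#     kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry -w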
	W1109 13:30:28.812813    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:29.209893    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.533080    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.59226929s)
	I1109 13:30:29.533158    4875 addons.go:480] Verifying addon ingress=true in "addons-651467"
	I1109 13:30:29.533359    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.428907176s)
	I1109 13:30:29.533543    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.296669566s)
	I1109 13:30:29.533603    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.140308051s)
	I1109 13:30:29.534027    4875 addons.go:480] Verifying addon metrics-server=true in "addons-651467"
	I1109 13:30:29.533641    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.094874469s)
	I1109 13:30:29.533711    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.487542902s)
	W1109 13:30:29.534151    4875 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:30:29.534238    4875 retry.go:31] will retry after 158.861087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
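	# Illustrative aside: the failure above is an ordering race, not a broken manifest. The batch
	# apply submits csi-hostpath-snapshotclass.yaml (a VolumeSnapshotClass object) in the same
	# invocation that creates the snapshot.storage.k8s.io CRDs, so the API server can reject the
	# custom resource before those CRDs are established; minikube simply retries, as the
	# "apply --force" run below shows. A manual two-phase apply that avoids the race might look
	# like this (paths taken from the log; commands assume the cluster's kubeconfig):
	#     kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	#     kubectl wait --for condition=established --timeout=60s \
	#         crd/volumesnapshotclasses.snapshot.storage.k8s.io
	#     kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml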
	I1109 13:30:29.536434    4875 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-651467 service yakd-dashboard -n yakd-dashboard
	
	I1109 13:30:29.536434    4875 out.go:179] * Verifying ingress addon...
	I1109 13:30:29.540217    4875 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 13:30:29.550087    4875 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 13:30:29.550108    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.694147    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:29.710715    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.831401    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.929213838s)
	I1109 13:30:29.831444    4875 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-651467"
	I1109 13:30:29.834681    4875 out.go:179] * Verifying csi-hostpath-driver addon...
	I1109 13:30:29.838305    4875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1109 13:30:29.843402    4875 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:29.843425    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.060829    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.188382    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.344753    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.544465    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.688765    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.841741    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.043980    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.077521    4875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1109 13:30:31.077654    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:31.095830    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:31.187994    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.231226    4875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1109 13:30:31.245101    4875 addons.go:239] Setting addon gcp-auth=true in "addons-651467"
	I1109 13:30:31.245153    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:31.245642    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:31.264193    4875 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1109 13:30:31.264248    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:31.282316    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	W1109 13:30:31.292098    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:31.341505    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.386870    4875 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:31.389883    4875 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1109 13:30:31.392760    4875 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1109 13:30:31.392788    4875 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1109 13:30:31.406203    4875 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1109 13:30:31.406223    4875 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1109 13:30:31.418827    4875 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:31.418852    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1109 13:30:31.431939    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:31.543908    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.687585    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.852921    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.932969    4875 addons.go:480] Verifying addon gcp-auth=true in "addons-651467"
	I1109 13:30:31.937981    4875 out.go:179] * Verifying gcp-auth addon...
	I1109 13:30:31.941634    4875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1109 13:30:31.951547    4875 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1109 13:30:31.951572    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.043596    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.187217    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.341951    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.444675    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.543930    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.687662    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.841909    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.944917    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.044162    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.187078    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.341304    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.444560    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.543478    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.687268    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:33.791940    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:33.841539    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.945102    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.044349    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.187062    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.341598    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.444508    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.543758    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.687730    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.842259    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.945270    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.043787    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.188137    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.341184    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.444984    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.544162    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.687211    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.841549    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.945187    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.043229    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.186772    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:36.291837    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:36.341726    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.444502    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.545083    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.686995    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.841904    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.944875    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.044138    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.187841    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.342273    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.445022    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.544420    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.687666    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.842175    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.945389    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.044015    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.187143    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.341929    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.448312    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.543630    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.687074    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:38.791543    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:38.841822    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.945952    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.044126    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.189112    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.342174    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.444784    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.544128    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.687799    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.841169    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.945840    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.044465    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.187552    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.341694    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.444718    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.543643    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.687804    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:40.792427    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:40.841561    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.944505    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.043378    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.187405    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.341510    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.444514    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.543523    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.688906    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.842400    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.945594    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.044467    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.187411    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.341280    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.445062    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.543975    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.686736    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.841540    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.944675    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.043765    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.187610    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:43.292589    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:43.341191    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.445490    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.543568    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.687575    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:43.840929    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.944819    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.043584    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.187507    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:44.340907    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.444843    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.543987    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.687109    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:44.841917    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.944961    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.050770    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.189716    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:45.295183    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:45.342598    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.444525    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.543196    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.687358    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:45.841227    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.945251    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.043344    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.187357    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:46.341217    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.444933    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.543989    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.687637    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:46.842053    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.945238    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.043401    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.188523    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:47.341781    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.444965    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.543749    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.688073    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:47.791570    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:47.841515    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.945561    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.043787    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.187708    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:48.341620    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.444995    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.544379    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.687107    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:48.842167    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.945152    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.043234    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.187838    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:49.341999    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.445138    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.544052    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.686714    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:49.792211    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:49.842396    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.945527    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.043759    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.187612    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:50.341098    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.444993    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.543951    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.687648    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:50.842226    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.944974    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.043944    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.186997    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:51.341904    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.444653    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.543406    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.687185    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:51.849512    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.944444    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.043385    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.186856    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:52.291621    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:52.341323    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.445012    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.544163    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.687019    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:52.842128    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.951720    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.043983    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.186866    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:53.341621    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.444635    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.543838    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.687804    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:53.841239    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.945191    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.043518    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.187694    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:54.341742    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.444377    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.543264    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.687271    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:54.791739    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:54.841648    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.944532    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.044837    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.188006    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:55.341875    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.444563    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.543838    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.687907    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:55.841627    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.945202    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.043483    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.187179    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:56.341651    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.444691    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.543833    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.687671    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:56.792523    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:56.841418    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.945147    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.043924    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.187608    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:57.341872    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.444835    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.544131    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.687381    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:57.841752    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.944935    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.046473    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.187128    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:58.341769    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.444493    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.543611    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.687474    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:58.841692    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.944633    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.043620    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.187791    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:59.291597    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:59.341379    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.445007    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.544944    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.688073    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:59.842193    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.945063    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.072158    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.236080    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:00.347674    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.445470    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.543997    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.687676    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:00.841317    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.945259    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.043636    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.187996    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:01.341516    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.444531    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.543442    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.687310    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:31:01.792083    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:31:01.842124    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.944862    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.044457    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.187041    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:02.341796    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.444743    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.543669    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.688012    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:02.841980    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.945089    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.043891    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:03.187735    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:03.341646    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:03.445094    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.543258    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:03.687453    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:31:03.792599    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:31:03.841835    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:03.944741    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.044019    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.187617    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:04.341496    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.444440    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.543400    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.686997    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:04.841018    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.944974    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.044053    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.215419    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:05.297264    4875 node_ready.go:49] node "addons-651467" is "Ready"
	I1109 13:31:05.297309    4875 node_ready.go:38] duration metric: took 40.508401799s for node "addons-651467" to be "Ready" ...
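Editorial aside, not part of the captured output: the node_ready transition above (roughly 40s of "Ready":"False" retries before the node reports Ready) corresponds to the node's Ready condition flipping to True. A minimal client-go sketch of an equivalent check follows, assuming a standard kubeconfig path and the node name from the log; the polling interval is illustrative and this is not minikube's own node_ready implementation.

	// Editorial sketch: poll a node's Ready condition with client-go.
	// The kubeconfig path and 2s interval are assumptions for illustration.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-651467", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
			time.Sleep(2 * time.Second) // illustrative interval
		}
	}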
	I1109 13:31:05.297323    4875 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:31:05.297393    4875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:31:05.314492    4875 api_server.go:72] duration metric: took 42.233802998s to wait for apiserver process to appear ...
	I1109 13:31:05.314529    4875 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:31:05.314548    4875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 13:31:05.326881    4875 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 13:31:05.329478    4875 api_server.go:141] control plane version: v1.34.1
	I1109 13:31:05.329513    4875 api_server.go:131] duration metric: took 14.971994ms to wait for apiserver health ...
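Editorial aside, not part of the captured output: the healthz probe logged above is a plain HTTPS GET against the apiserver that returns the literal body "ok" on success. A minimal sketch of the same request, assuming the endpoint address from the log and skipping certificate verification as one typically would against a throwaway local cluster.

	// Editorial sketch: reproduce the /healthz probe seen in the log.
	// The address 192.168.49.2:8443 is taken from the log above; skipping
	// TLS verification is an assumption for a local minikube apiserver.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}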
	I1109 13:31:05.329522    4875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:31:05.343325    4875 system_pods.go:59] 19 kube-system pods found
	I1109 13:31:05.343436    4875 system_pods.go:61] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:05.343460    4875 system_pods.go:61] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending
	I1109 13:31:05.343494    4875 system_pods.go:61] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending
	I1109 13:31:05.343517    4875 system_pods.go:61] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending
	I1109 13:31:05.343535    4875 system_pods.go:61] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:05.343553    4875 system_pods.go:61] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:05.343571    4875 system_pods.go:61] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:05.343599    4875 system_pods.go:61] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:05.343623    4875 system_pods.go:61] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending
	I1109 13:31:05.343641    4875 system_pods.go:61] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:05.343658    4875 system_pods.go:61] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:05.343678    4875 system_pods.go:61] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending
	I1109 13:31:05.343707    4875 system_pods.go:61] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending
	I1109 13:31:05.343728    4875 system_pods.go:61] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending
	I1109 13:31:05.343749    4875 system_pods.go:61] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:05.343769    4875 system_pods.go:61] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending
	I1109 13:31:05.343791    4875 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.343826    4875 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.343844    4875 system_pods.go:61] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending
	I1109 13:31:05.343910    4875 system_pods.go:74] duration metric: took 14.335442ms to wait for pod list to return data ...
	I1109 13:31:05.343942    4875 default_sa.go:34] waiting for default service account to be created ...
	I1109 13:31:05.346297    4875 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:31:05.346360    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.348684    4875 default_sa.go:45] found service account: "default"
	I1109 13:31:05.348738    4875 default_sa.go:55] duration metric: took 4.777458ms for default service account to be created ...
	I1109 13:31:05.348762    4875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 13:31:05.364970    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:05.365053    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:05.365074    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending
	I1109 13:31:05.365093    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending
	I1109 13:31:05.365124    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending
	I1109 13:31:05.365148    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:05.365167    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:05.365184    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:05.365201    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:05.365230    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending
	I1109 13:31:05.365251    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:05.365269    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:05.365286    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending
	I1109 13:31:05.365304    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending
	I1109 13:31:05.365331    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending
	I1109 13:31:05.365356    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:05.365374    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending
	I1109 13:31:05.365394    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.365414    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.365449    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending
	I1109 13:31:05.365481    4875 retry.go:31] will retry after 295.318675ms: missing components: kube-dns
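Editorial aside, not part of the captured output: the retry.go lines here show the generic poll-and-retry pattern the test uses while kube-dns is still missing, re-listing kube-system pods after a short, growing delay. A stand-alone sketch of that pattern under stated assumptions: componentsRunning is a placeholder for the real pod check, and the delays are illustrative rather than minikube's actual backoff parameters.

	// Editorial sketch: retry a readiness check with a growing, jittered delay.
	// componentsRunning is a placeholder; the real test lists kube-system pods
	// and reports which required components (e.g. kube-dns) are not yet Running.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func componentsRunning() bool {
		// Placeholder condition standing in for the API-server pod listing.
		return time.Now().Unix()%5 == 0
	}

	func main() {
		delay := 200 * time.Millisecond
		for attempt := 1; ; attempt++ {
			if componentsRunning() {
				fmt.Printf("ready after %d attempt(s)\n", attempt)
				return
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)/2)) // add jitter
			fmt.Printf("will retry after %v: missing components: kube-dns\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2 // grow the base delay between attempts
		}
	}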
	I1109 13:31:05.467403    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.582593    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.671960    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:05.675627    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:05.675650    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:05.675661    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:05.675666    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending
	I1109 13:31:05.675672    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:05.675678    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:05.675682    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:05.675696    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:05.675704    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:05.675708    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:05.675713    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:05.675719    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:05.675723    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending
	I1109 13:31:05.675730    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:05.675736    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:05.675740    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending
	I1109 13:31:05.675748    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.675774    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.675782    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:31:05.675799    4875 retry.go:31] will retry after 242.088146ms: missing components: kube-dns
	I1109 13:31:05.692207    4875 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:31:05.692233    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:05.841781    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.928229    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:05.928268    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:05.928278    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:05.928287    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:05.928295    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:31:05.928310    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:05.928321    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:05.928326    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:05.928349    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:05.928358    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:05.928369    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:05.928374    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:05.928380    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:05.928390    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:31:05.928397    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:05.928403    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:05.928412    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:31:05.928421    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.928428    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.928439    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:31:05.928455    4875 retry.go:31] will retry after 467.918653ms: missing components: kube-dns
	I1109 13:31:05.945740    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.045025    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.194123    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:06.342954    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.406127    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:06.406167    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:06.406177    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:06.406203    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:06.406218    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:31:06.406225    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:06.406237    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:06.406242    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:06.406246    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:06.406257    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:06.406262    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:06.406266    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:06.406273    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:06.406280    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:31:06.406289    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:06.406296    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:06.406304    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:31:06.406314    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:06.406323    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:06.406333    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:31:06.406349    4875 retry.go:31] will retry after 565.373843ms: missing components: kube-dns
	I1109 13:31:06.445397    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.543520    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.687191    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:06.842214    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.948442    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.977367    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:06.977404    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:06.977414    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:06.977454    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:06.977470    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:31:06.977475    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:06.977481    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:06.977489    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:06.977493    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:06.977520    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:06.977535    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:06.977541    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:06.977557    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:06.977571    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:31:06.977581    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:06.977595    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:06.977602    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:31:06.977611    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:06.977617    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:06.977648    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:31:06.977670    4875 retry.go:31] will retry after 692.918636ms: missing components: kube-dns
	I1109 13:31:07.048337    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.187783    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:07.344616    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.444691    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:07.543985    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.679476    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:07.679519    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Running
	I1109 13:31:07.679546    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:07.679562    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:07.679571    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:31:07.679579    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:07.679585    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:07.679610    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:07.679628    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:07.679645    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:07.679654    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:07.679659    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:07.679681    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:07.679696    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:31:07.679713    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:07.679726    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:07.679734    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:31:07.679758    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:07.679773    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:07.679779    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Running
	I1109 13:31:07.679792    4875 system_pods.go:126] duration metric: took 2.331012986s to wait for k8s-apps to be running ...
	I1109 13:31:07.679799    4875 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:31:07.679897    4875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:31:07.688565    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:07.698508    4875 system_svc.go:56] duration metric: took 18.686427ms WaitForService to wait for kubelet
	I1109 13:31:07.698536    4875 kubeadm.go:587] duration metric: took 44.617850738s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:31:07.698555    4875 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:31:07.701923    4875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 13:31:07.701964    4875 node_conditions.go:123] node cpu capacity is 2
	I1109 13:31:07.701978    4875 node_conditions.go:105] duration metric: took 3.388756ms to run NodePressure ...
	I1109 13:31:07.702005    4875 start.go:242] waiting for startup goroutines ...
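Editorial aside, not part of the captured output: the system_svc check above runs "sudo systemctl is-active --quiet ... kubelet" inside the node over SSH and treats a zero exit status as the service being active. A minimal local sketch of the same idea with os/exec, assuming a systemd host; the real test executes the command through minikube's ssh_runner rather than locally.

	// Editorial sketch: a zero exit status from `systemctl is-active --quiet`
	// means the kubelet service is active. Assumes a local systemd host; the
	// logged test runs the equivalent command inside the minikube node via SSH.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}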
	I1109 13:31:07.842394    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.960123    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.046382    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.187846    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:08.343396    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.445356    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.543811    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.687924    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:08.842283    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.948645    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.049198    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.187977    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:09.342353    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.445445    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.544002    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.688225    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:09.843082    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.945123    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.045007    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.186992    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:10.343270    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.445058    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.544664    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.687983    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:10.842447    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.945724    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.044365    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.187704    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:11.342798    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.445156    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.543531    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.687366    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:11.841821    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.944360    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.043949    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.190993    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:12.342825    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.445528    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.543369    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.687987    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:12.843084    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.945278    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.044050    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.187382    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:13.341803    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.445113    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.543461    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.689221    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:13.848023    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.945283    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.043373    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.190863    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:14.342243    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.444853    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.543808    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.687759    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:14.842206    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.945404    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.047137    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.187431    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:15.341907    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.445293    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.544445    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.687613    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:15.841872    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.944731    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.043592    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.187324    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:16.341740    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.444531    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.543477    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.687383    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:16.842388    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.945447    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.045184    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.189101    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:17.349179    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.450036    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.544763    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.689113    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:17.844157    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.945448    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.044398    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:18.187486    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:18.343418    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.447578    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.544146    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:18.687790    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:18.842840    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.944690    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.043900    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:19.187354    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:19.342512    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.445269    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.546362    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:19.686902    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:19.842342    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.947533    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.043959    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:20.188453    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:20.343467    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.444967    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.544632    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:20.687560    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:20.842606    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.944591    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.043938    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:21.188426    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:21.341610    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:21.444757    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.544030    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:21.688620    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:21.841769    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:21.945236    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:22.043303    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:22.187945    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:22.342469    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:22.445890    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:22.544575    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:22.687594    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:22.842331    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:22.945779    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.044541    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:23.187397    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:23.344822    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:23.445600    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.543995    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:23.688150    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:23.843190    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:23.944798    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.044277    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:24.187155    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:24.342839    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:24.445181    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.544115    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:24.687983    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:24.842171    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:24.945373    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:25.043491    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:25.187806    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:25.341828    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:25.444875    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:25.544130    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:25.687326    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:25.842203    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:25.945528    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:26.043966    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:26.188173    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:26.344388    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:26.445815    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:26.544249    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:26.687080    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:26.841607    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:26.945926    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:27.047242    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:27.187195    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:27.342760    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:27.445176    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:27.543478    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:27.687982    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:27.842558    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:27.944879    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:28.044377    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:28.188063    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:28.342036    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:28.445665    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:28.544099    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:28.687583    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:28.841785    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:28.944916    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:29.044315    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:29.187575    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:29.341737    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:29.444575    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:29.543715    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:29.687776    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:29.842431    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:29.946012    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:30.044243    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:30.187701    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:30.342547    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:30.445283    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:30.543718    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:30.687550    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:30.842633    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:30.944680    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:31.044258    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:31.187306    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:31.341708    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:31.445494    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:31.543574    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:31.688014    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:31.842688    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:31.946687    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:32.047125    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:32.192866    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:32.342357    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:32.445380    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:32.544778    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:32.687669    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:32.842254    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:32.945295    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:33.044784    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:33.187760    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:33.343100    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:33.447328    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:33.543366    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:33.688731    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:33.842796    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:33.945690    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:34.044155    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:34.188030    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:34.342055    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:34.448857    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:34.544248    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:34.689320    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:34.842107    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:34.945278    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:35.043821    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:35.188379    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:35.342020    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:35.445289    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:35.543490    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:35.687199    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:35.842553    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:35.944595    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:36.044417    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:36.187616    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:36.342137    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:36.445625    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:36.543691    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:36.687677    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:36.842580    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:36.945091    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:37.045399    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:37.220947    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:37.351341    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:37.445243    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:37.543533    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:37.688016    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:37.843577    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:37.944950    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:38.044236    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:38.188016    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:38.342953    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:38.445788    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:38.546825    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:38.687962    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:38.843115    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:38.945213    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:39.043172    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:39.187235    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:39.343301    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:39.445517    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:39.543933    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:39.687495    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:39.841859    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:39.944789    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:40.044087    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:40.187795    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:40.342135    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:40.445282    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:40.543473    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:40.687270    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:40.841498    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:40.945192    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:41.044162    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:41.186994    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:41.342455    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:41.446241    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:41.546227    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:41.687218    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:41.842979    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:41.945303    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:42.043956    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:42.187315    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:42.341928    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:42.445341    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:42.543917    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:42.687782    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:42.842432    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:42.945810    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:43.044393    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:43.187754    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:43.342273    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:43.461680    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:43.549767    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:43.697455    4875 kapi.go:107] duration metric: took 1m15.013397296s to wait for kubernetes.io/minikube-addons=registry ...
	I1109 13:31:43.842415    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:43.946510    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:44.043559    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:44.342061    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:44.445557    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:44.547096    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:44.842895    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:44.944782    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:45.047173    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:45.341699    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:45.445454    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:45.543650    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:45.841933    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:45.944804    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:46.044456    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:46.343326    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:46.446617    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:46.544102    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:46.841436    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:46.946046    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:47.044429    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:47.342123    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:47.445977    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:47.544249    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:47.841836    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:47.944657    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:48.044235    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:48.342357    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:48.445867    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:48.549619    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:48.843075    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:48.945260    4875 kapi.go:107] duration metric: took 1m17.003625512s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1109 13:31:48.948561    4875 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-651467 cluster.
	I1109 13:31:48.952537    4875 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1109 13:31:48.955607    4875 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
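	(Editor's note on the three gcp-auth messages above: they describe opting a pod out of credential mounting by adding the `gcp-auth-skip-secret` label. A minimal illustrative sketch follows, assuming the webhook only checks for the presence of that label key; the pod name, container name, and "true" value are placeholders, and kubectl accepts JSON as well as YAML, so the standard library is enough.)

	    # Hypothetical example: a pod manifest carrying the gcp-auth-skip-secret
	    # label so the gcp-auth webhook does not mount GCP credentials into it.
	    import json

	    pod = {
	        "apiVersion": "v1",
	        "kind": "Pod",
	        "metadata": {
	            "name": "no-gcp-creds",  # placeholder name
	            "labels": {
	                # presence of this key is what the webhook looks for
	                "gcp-auth-skip-secret": "true",
	            },
	        },
	        "spec": {
	            "containers": [
	                {"name": "app", "image": "kicbase/echo-server:1.0"},
	            ],
	        },
	    }

	    # pipe the output into `kubectl apply -f -`
	    print(json.dumps(pod, indent=2))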
	I1109 13:31:49.044055    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:49.342258    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:49.543429    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:49.842401    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:50.044020    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:50.342456    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:50.544193    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:50.842339    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:51.043329    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:51.341475    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:51.543592    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:51.841701    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:52.044393    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:52.341902    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:52.544571    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:52.842440    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:53.043262    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:53.341824    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:53.543939    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:53.846287    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:54.045580    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:54.342291    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:54.546575    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:54.842213    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:55.043634    4875 kapi.go:107] duration metric: took 1m25.503415203s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 13:31:55.343987    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:55.841129    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:56.341159    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:56.841674    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:57.342718    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:57.848790    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:58.343439    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:58.853473    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:59.356564    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:59.844687    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:00.348620    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:00.842376    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:01.341809    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:01.842406    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:02.342580    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:02.842771    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:03.341724    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:03.842929    4875 kapi.go:107] duration metric: took 1m34.00462405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1109 13:32:03.846167    4875 out.go:179] * Enabled addons: registry-creds, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1109 13:32:03.849156    4875 addons.go:515] duration metric: took 1m40.768204928s for enable addons: enabled=[registry-creds storage-provisioner storage-provisioner-rancher inspektor-gadget amd-gpu-device-plugin nvidia-device-plugin ingress-dns cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1109 13:32:03.849212    4875 start.go:247] waiting for cluster config update ...
	I1109 13:32:03.849237    4875 start.go:256] writing updated cluster config ...
	I1109 13:32:03.850140    4875 ssh_runner.go:195] Run: rm -f paused
	I1109 13:32:03.853966    4875 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:32:03.857807    4875 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2bvft" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.863009    4875 pod_ready.go:94] pod "coredns-66bc5c9577-2bvft" is "Ready"
	I1109 13:32:03.863043    4875 pod_ready.go:86] duration metric: took 5.206095ms for pod "coredns-66bc5c9577-2bvft" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.865356    4875 pod_ready.go:83] waiting for pod "etcd-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.869934    4875 pod_ready.go:94] pod "etcd-addons-651467" is "Ready"
	I1109 13:32:03.870023    4875 pod_ready.go:86] duration metric: took 4.627924ms for pod "etcd-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.872828    4875 pod_ready.go:83] waiting for pod "kube-apiserver-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.877953    4875 pod_ready.go:94] pod "kube-apiserver-addons-651467" is "Ready"
	I1109 13:32:03.877981    4875 pod_ready.go:86] duration metric: took 5.12629ms for pod "kube-apiserver-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.880979    4875 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:04.257997    4875 pod_ready.go:94] pod "kube-controller-manager-addons-651467" is "Ready"
	I1109 13:32:04.258077    4875 pod_ready.go:86] duration metric: took 377.069727ms for pod "kube-controller-manager-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:04.458846    4875 pod_ready.go:83] waiting for pod "kube-proxy-mbtfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:04.858165    4875 pod_ready.go:94] pod "kube-proxy-mbtfx" is "Ready"
	I1109 13:32:04.858195    4875 pod_ready.go:86] duration metric: took 399.321259ms for pod "kube-proxy-mbtfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:05.058712    4875 pod_ready.go:83] waiting for pod "kube-scheduler-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:05.457517    4875 pod_ready.go:94] pod "kube-scheduler-addons-651467" is "Ready"
	I1109 13:32:05.457545    4875 pod_ready.go:86] duration metric: took 398.743819ms for pod "kube-scheduler-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:05.457558    4875 pod_ready.go:40] duration metric: took 1.60355954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:32:05.527300    4875 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 13:32:05.530609    4875 out.go:179] * Done! kubectl is now configured to use "addons-651467" cluster and "default" namespace by default
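	(Editor's note: the pod_ready lines above poll kube-system pods by label until each reports a Ready condition. A rough equivalent using the official Python kubernetes client is sketched below; it assumes a local kubeconfig whose current context points at the addons-651467 cluster, and the timeout, sleep interval, and selector list are illustrative only, not the test framework's actual values.)

	    # Illustrative sketch: wait until all kube-system pods matching a label
	    # selector report the Ready condition, mirroring the log's pod_ready loop.
	    import time
	    from kubernetes import client, config

	    config.load_kube_config()  # assumes a reachable kubeconfig/context
	    v1 = client.CoreV1Api()

	    def pod_is_ready(pod):
	        # a pod is Ready when its Ready condition has status "True"
	        for cond in pod.status.conditions or []:
	            if cond.type == "Ready" and cond.status == "True":
	                return True
	        return False

	    def wait_for_label(selector, namespace="kube-system", timeout=240):
	        deadline = time.time() + timeout
	        while time.time() < deadline:
	            pods = v1.list_namespaced_pod(namespace, label_selector=selector).items
	            if pods and all(pod_is_ready(p) for p in pods):
	                return True
	            time.sleep(2)
	        return False

	    for sel in ("k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"):
	        print(sel, "ready:", wait_for_label(sel))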
	
	
	==> CRI-O <==
	Nov 09 13:34:19 addons-651467 crio[830]: time="2025-11-09T13:34:19.022206703Z" level=info msg="Removed pod sandbox: e027598c8f414be1660c494f89d8d6201efcb240cc144c0a92d68c74812cfe3b" id=3f278e6d-ce80-4609-967c-83ad4822c6c3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.48674209Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-mh878/POD" id=dc052c6d-8f31-4712-ae2b-83f082122f85 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.486809732Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.509217761Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-mh878 Namespace:default ID:0e65d20ece5689de4b2f1ca06aef9e72526e516584bad3da5e64bfbbd14182c6 UID:456f5d53-3057-4e07-a897-98403b9b41df NetNS:/var/run/netns/53c2fddb-5f43-4283-9706-169bc952eaa5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001fd0030}] Aliases:map[]}"
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.509268927Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-mh878 to CNI network \"kindnet\" (type=ptp)"
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.523453663Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-mh878 Namespace:default ID:0e65d20ece5689de4b2f1ca06aef9e72526e516584bad3da5e64bfbbd14182c6 UID:456f5d53-3057-4e07-a897-98403b9b41df NetNS:/var/run/netns/53c2fddb-5f43-4283-9706-169bc952eaa5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001fd0030}] Aliases:map[]}"
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.523597808Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-mh878 for CNI network kindnet (type=ptp)"
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.52707592Z" level=info msg="Ran pod sandbox 0e65d20ece5689de4b2f1ca06aef9e72526e516584bad3da5e64bfbbd14182c6 with infra container: default/hello-world-app-5d498dc89-mh878/POD" id=dc052c6d-8f31-4712-ae2b-83f082122f85 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.537874883Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3188bf5c-26e9-4e32-9733-45c9538f9844 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.538018298Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=3188bf5c-26e9-4e32-9733-45c9538f9844 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.538054302Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=3188bf5c-26e9-4e32-9733-45c9538f9844 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.541514534Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=784e2ece-ff88-4fc1-9127-20057a441e01 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:35:12 addons-651467 crio[830]: time="2025-11-09T13:35:12.546388828Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.188607065Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=784e2ece-ff88-4fc1-9127-20057a441e01 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.190042878Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a8ef457e-2935-426d-9f04-303c435e7532 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.197991206Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c8edf4a3-c632-42c6-af68-26ce71e805ad name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.211441378Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-mh878/hello-world-app" id=3d27e067-c591-4f5c-b5bd-c32ec167fa78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.21176449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.243985693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.244259903Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c7acc2b34cb78125831ecf8e3d63d1b1a51f5bfccbe72a7a98abf81b741957fb/merged/etc/passwd: no such file or directory"
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.244306335Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c7acc2b34cb78125831ecf8e3d63d1b1a51f5bfccbe72a7a98abf81b741957fb/merged/etc/group: no such file or directory"
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.244675214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.283789331Z" level=info msg="Created container e9fe02da89299570bc6bd733f307daf9913c6a326f9837d1868609020b40eb8e: default/hello-world-app-5d498dc89-mh878/hello-world-app" id=3d27e067-c591-4f5c-b5bd-c32ec167fa78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.285020225Z" level=info msg="Starting container: e9fe02da89299570bc6bd733f307daf9913c6a326f9837d1868609020b40eb8e" id=6cae2450-58b3-4383-9947-5ce15790db14 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 13:35:13 addons-651467 crio[830]: time="2025-11-09T13:35:13.287046278Z" level=info msg="Started container" PID=7014 containerID=e9fe02da89299570bc6bd733f307daf9913c6a326f9837d1868609020b40eb8e description=default/hello-world-app-5d498dc89-mh878/hello-world-app id=6cae2450-58b3-4383-9947-5ce15790db14 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0e65d20ece5689de4b2f1ca06aef9e72526e516584bad3da5e64bfbbd14182c6
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	e9fe02da89299       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   0e65d20ece568       hello-world-app-5d498dc89-mh878             default
	efa4474c0cdd3       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago            Running             nginx                                    0                   3a88cb019b721       nginx                                       default
	006bf09a4271c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   82d821c1afe81       busybox                                     default
	d2bf491a803e1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   c36845a2e7f54       csi-hostpathplugin-txjcd                    kube-system
	ae9a6f508e15b       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   c36845a2e7f54       csi-hostpathplugin-txjcd                    kube-system
	93a600602192b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   c36845a2e7f54       csi-hostpathplugin-txjcd                    kube-system
	f480ecab5b392       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   c36845a2e7f54       csi-hostpathplugin-txjcd                    kube-system
	7d0e397731f1d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            3 minutes ago            Running             gadget                                   0                   a9a4dc25ab575       gadget-9q8bf                                gadget
	9da3d3ae626ec       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   d2c3faf0fc87d       ingress-nginx-controller-675c5ddd98-6lswb   ingress-nginx
	90208adb8fd21       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   9d1c631f65bae       gcp-auth-78565c9fb4-qqfqm                   gcp-auth
	a21703b53016b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   c36845a2e7f54       csi-hostpathplugin-txjcd                    kube-system
	00a017d960b12       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   580951646e443       registry-proxy-7mv24                        kube-system
	ddbfebb8b3bd8       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   00963cb17a76a       csi-hostpath-resizer-0                      kube-system
	cf0248d05e312       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   ecaa5e9d76d87       nvidia-device-plugin-daemonset-rx8x7        kube-system
	7ab837fe1c905       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   634ea2c369e72       ingress-nginx-admission-patch-bp4lk         ingress-nginx
	14c1c07c042a8       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   beaeca0cb57d4       registry-6b586f9694-kzz6v                   kube-system
	d6ac5bca1cd4a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   d19f25ed24632       kube-ingress-dns-minikube                   kube-system
	8f8d82b9ad544       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   cd681023af0f6       csi-hostpath-attacher-0                     kube-system
	a330963f46686       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago            Running             cloud-spanner-emulator                   0                   f3fbd629991d7       cloud-spanner-emulator-6f9fcf858b-gv67d     default
	07e93bef4f027       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago            Running             metrics-server                           0                   b785fa871315c       metrics-server-85b7d694d7-lmgbd             kube-system
	1cfcc34c91d70       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   c36845a2e7f54       csi-hostpathplugin-txjcd                    kube-system
	bf2a38a499919       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   4 minutes ago            Exited              create                                   0                   9bad713001cd2       ingress-nginx-admission-create-29qmn        ingress-nginx
	c8972766fd694       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   9d2e90e9f9df4       snapshot-controller-7d9fbc56b8-d4qqx        kube-system
	9b1d34d40bba4       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   c7e75da2a7ca4       snapshot-controller-7d9fbc56b8-jmnwh        kube-system
	a23ffdd04b966       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   5cb2e8be2016a       yakd-dashboard-5ff678cb9-srcxl              yakd-dashboard
	dd4e8ee49564d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   c1b62a03ba4e6       local-path-provisioner-648f6765c9-mlhnm     local-path-storage
	656c0f0ceda1f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   874a92377ddf2       storage-provisioner                         kube-system
	c23a1bc6ea5a8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   0a257a162663e       coredns-66bc5c9577-2bvft                    kube-system
	d46f515271a1e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   f5f39b8c487ea       kindnet-9qtn5                               kube-system
	0ba9b918f523b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago            Running             kube-proxy                               0                   9e232d2aed376       kube-proxy-mbtfx                            kube-system
	bfadffb4d9828       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   106c96544d7d7       kube-controller-manager-addons-651467       kube-system
	ab555c10c248c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   a5ecc3135a681       kube-scheduler-addons-651467                kube-system
	7f6d6e73f49ba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   c5f9839eb3fd6       kube-apiserver-addons-651467                kube-system
	e8a6b101abe65       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   755ffeca2c7d7       etcd-addons-651467                          kube-system
	
	
	==> coredns [c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a] <==
	[INFO] 10.244.0.15:53915 - 14353 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00300355s
	[INFO] 10.244.0.15:53915 - 65526 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000233425s
	[INFO] 10.244.0.15:53915 - 46749 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000157082s
	[INFO] 10.244.0.15:49252 - 7488 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000178417s
	[INFO] 10.244.0.15:49252 - 7282 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097044s
	[INFO] 10.244.0.15:45563 - 15603 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000521399s
	[INFO] 10.244.0.15:45563 - 15775 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170778s
	[INFO] 10.244.0.15:52691 - 59110 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105372s
	[INFO] 10.244.0.15:52691 - 59298 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000181444s
	[INFO] 10.244.0.15:42324 - 25155 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001457733s
	[INFO] 10.244.0.15:42324 - 25368 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00152463s
	[INFO] 10.244.0.15:54082 - 23193 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000120783s
	[INFO] 10.244.0.15:54082 - 23020 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138505s
	[INFO] 10.244.0.19:45970 - 6597 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017072s
	[INFO] 10.244.0.19:53831 - 29398 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00007169s
	[INFO] 10.244.0.19:59422 - 47365 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089758s
	[INFO] 10.244.0.19:48802 - 62789 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113677s
	[INFO] 10.244.0.19:52243 - 50316 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105922s
	[INFO] 10.244.0.19:37086 - 56244 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075144s
	[INFO] 10.244.0.19:54727 - 7982 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001616883s
	[INFO] 10.244.0.19:57198 - 40082 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001664498s
	[INFO] 10.244.0.19:44638 - 4097 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001329951s
	[INFO] 10.244.0.19:45027 - 20462 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001348733s
	[INFO] 10.244.0.23:57323 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000165781s
	[INFO] 10.244.0.23:33173 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161645s
	
	
	==> describe nodes <==
	Name:               addons-651467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-651467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=addons-651467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_30_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-651467
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-651467"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:30:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-651467
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:35:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:35:04 +0000   Sun, 09 Nov 2025 13:30:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:35:04 +0000   Sun, 09 Nov 2025 13:30:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:35:04 +0000   Sun, 09 Nov 2025 13:30:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:35:04 +0000   Sun, 09 Nov 2025 13:31:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-651467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d0f34771-67d1-4321-b924-10c217c33abf
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     cloud-spanner-emulator-6f9fcf858b-gv67d      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  default                     hello-world-app-5d498dc89-mh878              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-9q8bf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  gcp-auth                    gcp-auth-78565c9fb4-qqfqm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-6lswb    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m45s
	  kube-system                 coredns-66bc5c9577-2bvft                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m50s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 csi-hostpathplugin-txjcd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 etcd-addons-651467                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m55s
	  kube-system                 kindnet-9qtn5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m51s
	  kube-system                 kube-apiserver-addons-651467                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-addons-651467        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-mbtfx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-scheduler-addons-651467                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 metrics-server-85b7d694d7-lmgbd              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m46s
	  kube-system                 nvidia-device-plugin-daemonset-rx8x7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 registry-6b586f9694-kzz6v                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 registry-creds-764b6fb674-sppdf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 registry-proxy-7mv24                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 snapshot-controller-7d9fbc56b8-d4qqx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 snapshot-controller-7d9fbc56b8-jmnwh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  local-path-storage          local-path-provisioner-648f6765c9-mlhnm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-srcxl               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m49s                kube-proxy       
	  Normal   Starting                 5m3s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m3s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m3s (x8 over 5m3s)  kubelet          Node addons-651467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s (x8 over 5m3s)  kubelet          Node addons-651467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s (x8 over 5m3s)  kubelet          Node addons-651467 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m56s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m56s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m55s                kubelet          Node addons-651467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m55s                kubelet          Node addons-651467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m55s                kubelet          Node addons-651467 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m52s                node-controller  Node addons-651467 event: Registered Node addons-651467 in Controller
	  Normal   NodeReady                4m9s                 kubelet          Node addons-651467 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3] <==
	{"level":"warn","ts":"2025-11-09T13:30:14.494038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.505051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.532209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.574053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.613521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.646584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.665304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.698801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.732069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.750233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.778427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.816258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.838892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.887797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.902936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.930045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.948781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.961926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:15.064534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:30.083841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:30.107387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:52.919771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:52.935964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:52.965286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:52.980502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [90208adb8fd21b1cdd4ad940bede247fec478373f2117c89532a7e4e22f0eb20] <==
	2025/11/09 13:31:47 GCP Auth Webhook started!
	2025/11/09 13:32:06 Ready to marshal response ...
	2025/11/09 13:32:06 Ready to write response ...
	2025/11/09 13:32:06 Ready to marshal response ...
	2025/11/09 13:32:06 Ready to write response ...
	2025/11/09 13:32:06 Ready to marshal response ...
	2025/11/09 13:32:06 Ready to write response ...
	2025/11/09 13:32:27 Ready to marshal response ...
	2025/11/09 13:32:27 Ready to write response ...
	2025/11/09 13:32:29 Ready to marshal response ...
	2025/11/09 13:32:29 Ready to write response ...
	2025/11/09 13:32:29 Ready to marshal response ...
	2025/11/09 13:32:29 Ready to write response ...
	2025/11/09 13:32:36 Ready to marshal response ...
	2025/11/09 13:32:36 Ready to write response ...
	2025/11/09 13:32:47 Ready to marshal response ...
	2025/11/09 13:32:47 Ready to write response ...
	2025/11/09 13:32:53 Ready to marshal response ...
	2025/11/09 13:32:53 Ready to write response ...
	2025/11/09 13:33:13 Ready to marshal response ...
	2025/11/09 13:33:13 Ready to write response ...
	2025/11/09 13:35:12 Ready to marshal response ...
	2025/11/09 13:35:12 Ready to write response ...
	
	
	==> kernel <==
	 13:35:14 up 17 min,  0 user,  load average: 0.49, 0.88, 0.48
	Linux addons-651467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b] <==
	I1109 13:33:04.822344       1 main.go:301] handling current node
	I1109 13:33:14.821327       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:14.821372       1 main.go:301] handling current node
	I1109 13:33:24.823778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:24.823909       1 main.go:301] handling current node
	I1109 13:33:34.826167       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:34.826202       1 main.go:301] handling current node
	I1109 13:33:44.821428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:44.821468       1 main.go:301] handling current node
	I1109 13:33:54.822315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:33:54.822420       1 main.go:301] handling current node
	I1109 13:34:04.827984       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:34:04.828017       1 main.go:301] handling current node
	I1109 13:34:14.821483       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:34:14.821589       1 main.go:301] handling current node
	I1109 13:34:24.826486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:34:24.826591       1 main.go:301] handling current node
	I1109 13:34:34.827975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:34:34.828011       1 main.go:301] handling current node
	I1109 13:34:44.830700       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:34:44.830733       1 main.go:301] handling current node
	I1109 13:34:54.821303       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:34:54.821336       1 main.go:301] handling current node
	I1109 13:35:04.821748       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:35:04.821781       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60] <==
	W1109 13:30:52.965164       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 13:30:52.979208       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:31:05.125383       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.45.153:443: connect: connection refused
	E1109 13:31:05.125496       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.45.153:443: connect: connection refused" logger="UnhandledError"
	W1109 13:31:05.126102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.45.153:443: connect: connection refused
	E1109 13:31:05.127538       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.45.153:443: connect: connection refused" logger="UnhandledError"
	W1109 13:31:05.199640       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.45.153:443: connect: connection refused
	E1109 13:31:05.199777       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.45.153:443: connect: connection refused" logger="UnhandledError"
	E1109 13:31:19.394009       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.115.160:443: connect: connection refused" logger="UnhandledError"
	W1109 13:31:19.394171       1 handler_proxy.go:99] no RequestInfo found in the context
	E1109 13:31:19.394227       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1109 13:31:19.394993       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.115.160:443: connect: connection refused" logger="UnhandledError"
	E1109 13:31:19.401411       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.115.160:443: connect: connection refused" logger="UnhandledError"
	I1109 13:31:19.534890       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1109 13:32:16.622149       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35474: use of closed network connection
	E1109 13:32:16.851192       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35494: use of closed network connection
	E1109 13:32:17.009708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35512: use of closed network connection
	I1109 13:32:52.950680       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1109 13:32:53.277934       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.48.242"}
	I1109 13:32:59.424363       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1109 13:33:00.943853       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1109 13:35:12.372717       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.123.3"}
	
	
	==> kube-controller-manager [bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36] <==
	I1109 13:30:22.914090       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:30:22.921478       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:30:22.921507       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 13:30:22.921513       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 13:30:22.925575       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 13:30:22.937014       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:30:22.943401       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 13:30:22.949024       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 13:30:22.949133       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 13:30:22.949191       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 13:30:22.949383       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 13:30:22.949649       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 13:30:22.949705       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:30:22.949756       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 13:30:22.950190       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 13:30:22.951295       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	E1109 13:30:28.220729       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1109 13:30:52.912943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1109 13:30:52.913088       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1109 13:30:52.913137       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1109 13:30:52.949607       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1109 13:30:52.955740       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 13:30:53.018685       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:30:53.056549       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:31:07.885021       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52] <==
	I1109 13:30:24.651171       1 server_linux.go:53] "Using iptables proxy"
	I1109 13:30:24.749604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:30:24.850189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:30:24.850234       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 13:30:24.850318       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:30:25.053694       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 13:30:25.053754       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:30:25.080023       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:30:25.090986       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:30:25.091023       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:30:25.092876       1 config.go:200] "Starting service config controller"
	I1109 13:30:25.092907       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:30:25.092927       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:30:25.092933       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:30:25.094140       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:30:25.094165       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:30:25.094903       1 config.go:309] "Starting node config controller"
	I1109 13:30:25.094913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:30:25.094919       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:30:25.193606       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 13:30:25.193684       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:30:25.194992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795] <==
	I1109 13:30:16.279860       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1109 13:30:16.298807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1109 13:30:16.299212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:30:16.299364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 13:30:16.299463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:30:16.299568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:30:16.299772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:30:16.299849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:30:16.300005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:30:16.300064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:30:16.300115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:30:16.300174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 13:30:16.300226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 13:30:16.300567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 13:30:16.300644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:30:16.300661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:30:16.300774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:30:16.300834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:30:16.300837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:30:16.300883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:30:17.120508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:30:17.158761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:30:17.197309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:30:17.206819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1109 13:30:17.781928       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.897947    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/125f80ee-e40d-49c9-989a-7e5edd1c9831-gcp-creds\") pod \"125f80ee-e40d-49c9-989a-7e5edd1c9831\" (UID: \"125f80ee-e40d-49c9-989a-7e5edd1c9831\") "
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.898111    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a3e41f79-bd70-11f0-9205-f685f2ad57c9\") pod \"125f80ee-e40d-49c9-989a-7e5edd1c9831\" (UID: \"125f80ee-e40d-49c9-989a-7e5edd1c9831\") "
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.898163    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q92n2\" (UniqueName: \"kubernetes.io/projected/125f80ee-e40d-49c9-989a-7e5edd1c9831-kube-api-access-q92n2\") pod \"125f80ee-e40d-49c9-989a-7e5edd1c9831\" (UID: \"125f80ee-e40d-49c9-989a-7e5edd1c9831\") "
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.898272    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/125f80ee-e40d-49c9-989a-7e5edd1c9831-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "125f80ee-e40d-49c9-989a-7e5edd1c9831" (UID: "125f80ee-e40d-49c9-989a-7e5edd1c9831"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.904118    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125f80ee-e40d-49c9-989a-7e5edd1c9831-kube-api-access-q92n2" (OuterVolumeSpecName: "kube-api-access-q92n2") pod "125f80ee-e40d-49c9-989a-7e5edd1c9831" (UID: "125f80ee-e40d-49c9-989a-7e5edd1c9831"). InnerVolumeSpecName "kube-api-access-q92n2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.922501    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^a3e41f79-bd70-11f0-9205-f685f2ad57c9" (OuterVolumeSpecName: "task-pv-storage") pod "125f80ee-e40d-49c9-989a-7e5edd1c9831" (UID: "125f80ee-e40d-49c9-989a-7e5edd1c9831"). InnerVolumeSpecName "pvc-0dda2c39-5b4d-4199-9f3c-83d464c1ceff". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.998703    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q92n2\" (UniqueName: \"kubernetes.io/projected/125f80ee-e40d-49c9-989a-7e5edd1c9831-kube-api-access-q92n2\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.998742    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/125f80ee-e40d-49c9-989a-7e5edd1c9831-gcp-creds\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:33:21 addons-651467 kubelet[1291]: I1109 13:33:21.998768    1291 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0dda2c39-5b4d-4199-9f3c-83d464c1ceff\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a3e41f79-bd70-11f0-9205-f685f2ad57c9\") on node \"addons-651467\" "
	Nov 09 13:33:22 addons-651467 kubelet[1291]: I1109 13:33:22.003909    1291 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-0dda2c39-5b4d-4199-9f3c-83d464c1ceff" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^a3e41f79-bd70-11f0-9205-f685f2ad57c9") on node "addons-651467"
	Nov 09 13:33:22 addons-651467 kubelet[1291]: I1109 13:33:22.019763    1291 scope.go:117] "RemoveContainer" containerID="f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f"
	Nov 09 13:33:22 addons-651467 kubelet[1291]: I1109 13:33:22.032491    1291 scope.go:117] "RemoveContainer" containerID="f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f"
	Nov 09 13:33:22 addons-651467 kubelet[1291]: E1109 13:33:22.033053    1291 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f\": container with ID starting with f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f not found: ID does not exist" containerID="f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f"
	Nov 09 13:33:22 addons-651467 kubelet[1291]: I1109 13:33:22.033237    1291 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f"} err="failed to get container status \"f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f\": rpc error: code = NotFound desc = could not find container \"f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f\": container with ID starting with f0205e311001b38431f4a42f6326097bccc76b763810b2957282c5b99714617f not found: ID does not exist"
	Nov 09 13:33:22 addons-651467 kubelet[1291]: I1109 13:33:22.099932    1291 reconciler_common.go:299] "Volume detached for volume \"pvc-0dda2c39-5b4d-4199-9f3c-83d464c1ceff\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a3e41f79-bd70-11f0-9205-f685f2ad57c9\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:33:22 addons-651467 kubelet[1291]: I1109 13:33:22.805736    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125f80ee-e40d-49c9-989a-7e5edd1c9831" path="/var/lib/kubelet/pods/125f80ee-e40d-49c9-989a-7e5edd1c9831/volumes"
	Nov 09 13:33:56 addons-651467 kubelet[1291]: I1109 13:33:56.803799    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-7mv24" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:34:06 addons-651467 kubelet[1291]: I1109 13:34:06.803259    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-kzz6v" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:34:12 addons-651467 kubelet[1291]: I1109 13:34:12.803333    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rx8x7" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:35:12 addons-651467 kubelet[1291]: I1109 13:35:12.233965    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/456f5d53-3057-4e07-a897-98403b9b41df-gcp-creds\") pod \"hello-world-app-5d498dc89-mh878\" (UID: \"456f5d53-3057-4e07-a897-98403b9b41df\") " pod="default/hello-world-app-5d498dc89-mh878"
	Nov 09 13:35:12 addons-651467 kubelet[1291]: I1109 13:35:12.234592    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdkkv\" (UniqueName: \"kubernetes.io/projected/456f5d53-3057-4e07-a897-98403b9b41df-kube-api-access-vdkkv\") pod \"hello-world-app-5d498dc89-mh878\" (UID: \"456f5d53-3057-4e07-a897-98403b9b41df\") " pod="default/hello-world-app-5d498dc89-mh878"
	Nov 09 13:35:12 addons-651467 kubelet[1291]: W1109 13:35:12.533368    1291 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/crio-0e65d20ece5689de4b2f1ca06aef9e72526e516584bad3da5e64bfbbd14182c6 WatchSource:0}: Error finding container 0e65d20ece5689de4b2f1ca06aef9e72526e516584bad3da5e64bfbbd14182c6: Status 404 returned error can't find the container with id 0e65d20ece5689de4b2f1ca06aef9e72526e516584bad3da5e64bfbbd14182c6
	Nov 09 13:35:12 addons-651467 kubelet[1291]: I1109 13:35:12.802719    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-kzz6v" secret="" err="secret \"gcp-auth\" not found"
	Nov 09 13:35:13 addons-651467 kubelet[1291]: I1109 13:35:13.487576    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-mh878" podStartSLOduration=0.833933711 podStartE2EDuration="1.487561058s" podCreationTimestamp="2025-11-09 13:35:12 +0000 UTC" firstStartedPulling="2025-11-09 13:35:12.538326091 +0000 UTC m=+293.846476601" lastFinishedPulling="2025-11-09 13:35:13.19195343 +0000 UTC m=+294.500103948" observedRunningTime="2025-11-09 13:35:13.487299155 +0000 UTC m=+294.795449665" watchObservedRunningTime="2025-11-09 13:35:13.487561058 +0000 UTC m=+294.795711576"
	Nov 09 13:35:13 addons-651467 kubelet[1291]: I1109 13:35:13.803242    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-rx8x7" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc] <==
	W1109 13:34:49.362418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:51.365749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:51.370536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:53.373345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:53.380842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:55.383583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:55.390380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:57.394026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:57.398391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:59.401645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:34:59.407977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:01.410893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:01.415724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:03.418358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:03.422697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:05.425623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:05.430400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:07.434772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:07.439569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:09.442531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:09.448601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:11.451948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:11.458940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:13.480195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:35:13.501731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-651467 -n addons-651467
helpers_test.go:269: (dbg) Run:  kubectl --context addons-651467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-29qmn ingress-nginx-admission-patch-bp4lk registry-creds-764b6fb674-sppdf
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-651467 describe pod ingress-nginx-admission-create-29qmn ingress-nginx-admission-patch-bp4lk registry-creds-764b6fb674-sppdf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-651467 describe pod ingress-nginx-admission-create-29qmn ingress-nginx-admission-patch-bp4lk registry-creds-764b6fb674-sppdf: exit status 1 (127.820967ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-29qmn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bp4lk" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sppdf" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-651467 describe pod ingress-nginx-admission-create-29qmn ingress-nginx-admission-patch-bp4lk registry-creds-764b6fb674-sppdf: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (368.958984ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1109 13:35:15.674904   14315 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:35:15.676111   14315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:35:15.676127   14315 out.go:374] Setting ErrFile to fd 2...
	I1109 13:35:15.676133   14315 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:35:15.676483   14315 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:35:15.676846   14315 mustload.go:66] Loading cluster: addons-651467
	I1109 13:35:15.677360   14315 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:35:15.677383   14315 addons.go:607] checking whether the cluster is paused
	I1109 13:35:15.677538   14315 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:35:15.677555   14315 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:35:15.678068   14315 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:35:15.704062   14315 ssh_runner.go:195] Run: systemctl --version
	I1109 13:35:15.704130   14315 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:35:15.726196   14315 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:35:15.853411   14315 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:35:15.853489   14315 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:35:15.925837   14315 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:35:15.925908   14315 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:35:15.925916   14315 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:35:15.925921   14315 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:35:15.925924   14315 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:35:15.925927   14315 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:35:15.925930   14315 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:35:15.925933   14315 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:35:15.925936   14315 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:35:15.925942   14315 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:35:15.925946   14315 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:35:15.925949   14315 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:35:15.925952   14315 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:35:15.925955   14315 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:35:15.925958   14315 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:35:15.925962   14315 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:35:15.925966   14315 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:35:15.925982   14315 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:35:15.925986   14315 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:35:15.925989   14315 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:35:15.925993   14315 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:35:15.925997   14315 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:35:15.925999   14315 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:35:15.926002   14315 cri.go:89] found id: ""
	I1109 13:35:15.926051   14315 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:35:15.943314   14315 out.go:203] 
	W1109 13:35:15.946574   14315 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:35:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:35:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:35:15.946649   14315 out.go:285] * 
	* 
	W1109 13:35:15.950511   14315 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:35:15.953692   14315 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable ingress --alsologtostderr -v=1: exit status 11 (340.374669ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:35:16.030774   14453 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:35:16.030920   14453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:35:16.030926   14453 out.go:374] Setting ErrFile to fd 2...
	I1109 13:35:16.030930   14453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:35:16.031237   14453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:35:16.031550   14453 mustload.go:66] Loading cluster: addons-651467
	I1109 13:35:16.032094   14453 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:35:16.032120   14453 addons.go:607] checking whether the cluster is paused
	I1109 13:35:16.032275   14453 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:35:16.032293   14453 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:35:16.032791   14453 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:35:16.064151   14453 ssh_runner.go:195] Run: systemctl --version
	I1109 13:35:16.064202   14453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:35:16.093776   14453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:35:16.206816   14453 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:35:16.206926   14453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:35:16.248814   14453 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:35:16.248839   14453 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:35:16.248844   14453 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:35:16.248848   14453 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:35:16.248852   14453 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:35:16.248855   14453 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:35:16.248859   14453 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:35:16.248862   14453 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:35:16.248865   14453 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:35:16.248873   14453 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:35:16.248876   14453 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:35:16.248880   14453 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:35:16.248883   14453 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:35:16.248886   14453 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:35:16.248890   14453 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:35:16.248898   14453 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:35:16.248906   14453 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:35:16.248911   14453 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:35:16.248914   14453 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:35:16.248917   14453 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:35:16.248922   14453 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:35:16.248925   14453 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:35:16.248928   14453 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:35:16.248931   14453 cri.go:89] found id: ""
	I1109 13:35:16.248982   14453 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:35:16.279042   14453 out.go:203] 
	W1109 13:35:16.284339   14453 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:35:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:35:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:35:16.284377   14453 out.go:285] * 
	* 
	W1109 13:35:16.288303   14453 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:35:16.292698   14453 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.70s)
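Every `addons disable` invocation in this test exits with MK_ADDON_DISABLE_PAUSED for the same reason: minikube's paused-state check lists the kube-system containers with crictl and then runs `sudo runc list -f json` on the node, and that last step fails with `open /run/runc: no such file or directory`. A minimal sketch for reproducing the check by hand, using only the profile name and commands already visible in the trace (whether this node's CRI-O is actually configured to use runc, as opposed to another OCI runtime such as crun, is an assumption to verify rather than something this log states):

# same container listing the paused check performs (these are the IDs printed above)
minikube -p addons-651467 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

# the step that fails: runc keeps its container state under /run/runc
minikube -p addons-651467 ssh -- sudo runc list -f json
minikube -p addons-651467 ssh -- ls -ld /run/runc

# inspect the runtime CRI-O reports, to confirm or rule out the runc-vs-crun assumption
minikube -p addons-651467 ssh -- sudo crictl info
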

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9q8bf" [3eae0ab1-de28-44ba-86d7-e51b0bffce01] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003094213s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (253.257635ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:52.392260   12408 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:52.392450   12408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:52.392465   12408 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:52.392470   12408 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:52.392764   12408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:52.393120   12408 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:52.393532   12408 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:52.393553   12408 addons.go:607] checking whether the cluster is paused
	I1109 13:32:52.393693   12408 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:52.393725   12408 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:52.394261   12408 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:52.411498   12408 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:52.411547   12408 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:52.429672   12408 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:52.534893   12408 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:52.535033   12408 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:52.565523   12408 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:52.565548   12408 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:52.565554   12408 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:52.565559   12408 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:52.565562   12408 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:52.565566   12408 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:52.565588   12408 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:52.565599   12408 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:52.565603   12408 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:52.565613   12408 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:52.565620   12408 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:52.565624   12408 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:52.565627   12408 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:52.565630   12408 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:52.565634   12408 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:52.565643   12408 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:52.565650   12408 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:52.565680   12408 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:52.565685   12408 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:52.565689   12408 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:52.565694   12408 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:52.565698   12408 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:52.565701   12408 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:52.565708   12408 cri.go:89] found id: ""
	I1109 13:32:52.565768   12408 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:52.580628   12408 out.go:203] 
	W1109 13:32:52.583698   12408 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:52.583726   12408 out.go:285] * 
	* 
	W1109 13:32:52.588220   12408 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:52.591185   12408 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.41s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 6.588208ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003559093s
addons_test.go:463: (dbg) Run:  kubectl --context addons-651467 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (306.448117ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:46.081494   12181 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:46.081754   12181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:46.081788   12181 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:46.081808   12181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:46.082077   12181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:46.082385   12181 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:46.082845   12181 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:46.082878   12181 addons.go:607] checking whether the cluster is paused
	I1109 13:32:46.083004   12181 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:46.083028   12181 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:46.083566   12181 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:46.104921   12181 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:46.104980   12181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:46.137513   12181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:46.251023   12181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:46.251119   12181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:46.307158   12181 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:46.307182   12181 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:46.307188   12181 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:46.307192   12181 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:46.307195   12181 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:46.307199   12181 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:46.307202   12181 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:46.307205   12181 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:46.307209   12181 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:46.307221   12181 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:46.307229   12181 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:46.307232   12181 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:46.307235   12181 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:46.307238   12181 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:46.307242   12181 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:46.307249   12181 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:46.307255   12181 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:46.307260   12181 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:46.307263   12181 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:46.307266   12181 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:46.307270   12181 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:46.307274   12181 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:46.307277   12181 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:46.307280   12181 cri.go:89] found id: ""
	I1109 13:32:46.307331   12181 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:46.322237   12181 out.go:203] 
	W1109 13:32:46.325216   12181 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:46.325242   12181 out.go:285] * 
	* 
	W1109 13:32:46.329076   12181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:46.332118   12181 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.41s)

                                                
                                    
x
+
TestAddons/parallel/CSI (45.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1109 13:32:37.736354    4116 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1109 13:32:37.744357    4116 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1109 13:32:37.744385    4116 kapi.go:107] duration metric: took 8.046785ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 8.056443ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-651467 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-651467 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [2b4d7bb0-4306-4611-bd86-c4d98ad06c83] Pending
helpers_test.go:352: "task-pv-pod" [2b4d7bb0-4306-4611-bd86-c4d98ad06c83] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [2b4d7bb0-4306-4611-bd86-c4d98ad06c83] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.002923619s
addons_test.go:572: (dbg) Run:  kubectl --context addons-651467 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-651467 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-651467 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-651467 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-651467 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-651467 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-651467 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [125f80ee-e40d-49c9-989a-7e5edd1c9831] Pending
helpers_test.go:352: "task-pv-pod-restore" [125f80ee-e40d-49c9-989a-7e5edd1c9831] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [125f80ee-e40d-49c9-989a-7e5edd1c9831] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.007038052s
addons_test.go:614: (dbg) Run:  kubectl --context addons-651467 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-651467 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-651467 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (254.843297ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:33:22.496655   13172 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:33:22.496899   13172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:22.496926   13172 out.go:374] Setting ErrFile to fd 2...
	I1109 13:33:22.497005   13172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:22.497706   13172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:33:22.498121   13172 mustload.go:66] Loading cluster: addons-651467
	I1109 13:33:22.498524   13172 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:22.498559   13172 addons.go:607] checking whether the cluster is paused
	I1109 13:33:22.498696   13172 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:22.498725   13172 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:33:22.499243   13172 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:33:22.518445   13172 ssh_runner.go:195] Run: systemctl --version
	I1109 13:33:22.518500   13172 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:33:22.536204   13172 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:33:22.642062   13172 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:33:22.642168   13172 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:33:22.671816   13172 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:33:22.671839   13172 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:33:22.671845   13172 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:33:22.671848   13172 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:33:22.671852   13172 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:33:22.671855   13172 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:33:22.671858   13172 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:33:22.671895   13172 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:33:22.671899   13172 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:33:22.671906   13172 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:33:22.671912   13172 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:33:22.671916   13172 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:33:22.671919   13172 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:33:22.671925   13172 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:33:22.671928   13172 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:33:22.671933   13172 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:33:22.671942   13172 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:33:22.671947   13172 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:33:22.671950   13172 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:33:22.671953   13172 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:33:22.671958   13172 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:33:22.671961   13172 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:33:22.671964   13172 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:33:22.671967   13172 cri.go:89] found id: ""
	I1109 13:33:22.672016   13172 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:33:22.685863   13172 out.go:203] 
	W1109 13:33:22.688741   13172 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:33:22.688762   13172 out.go:285] * 
	* 
	W1109 13:33:22.692582   13172 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:33:22.695541   13172 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (283.491633ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:33:22.750357   13217 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:33:22.750583   13217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:22.750594   13217 out.go:374] Setting ErrFile to fd 2...
	I1109 13:33:22.750601   13217 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:33:22.751411   13217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:33:22.752105   13217 mustload.go:66] Loading cluster: addons-651467
	I1109 13:33:22.752586   13217 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:22.752606   13217 addons.go:607] checking whether the cluster is paused
	I1109 13:33:22.752753   13217 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:33:22.752771   13217 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:33:22.753260   13217 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:33:22.773269   13217 ssh_runner.go:195] Run: systemctl --version
	I1109 13:33:22.773328   13217 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:33:22.793235   13217 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:33:22.908898   13217 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:33:22.908979   13217 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:33:22.952684   13217 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:33:22.952704   13217 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:33:22.952709   13217 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:33:22.952712   13217 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:33:22.952715   13217 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:33:22.952719   13217 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:33:22.952722   13217 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:33:22.952725   13217 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:33:22.952729   13217 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:33:22.952735   13217 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:33:22.952739   13217 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:33:22.952743   13217 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:33:22.952746   13217 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:33:22.952751   13217 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:33:22.952754   13217 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:33:22.952759   13217 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:33:22.952762   13217 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:33:22.952766   13217 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:33:22.952769   13217 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:33:22.952772   13217 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:33:22.952777   13217 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:33:22.952780   13217 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:33:22.952783   13217 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:33:22.952785   13217 cri.go:89] found id: ""
	I1109 13:33:22.952835   13217 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:33:22.969406   13217 out.go:203] 
	W1109 13:33:22.972318   13217 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:33:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:33:22.972360   13217 out.go:285] * 
	* 
	W1109 13:33:22.976523   13217 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:33:22.979630   13217 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (45.25s)
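Before the teardown failed with the same MK_ADDON_DISABLE_PAUSED error, the steps above exercised the full snapshot workflow: PVC `hpvc`, pod `task-pv-pod` writing to it, VolumeSnapshot `new-snapshot-demo`, then a restore PVC `hpvc-restore` and pod `task-pv-pod-restore` created from that snapshot. The testdata manifests themselves are not reproduced in this report; the following is a hedged sketch of what the snapshot and restore objects typically look like for the csi-hostpath-driver addon (object names come from the log; the storageClassName and volumeSnapshotClassName values are assumptions):

# hedged sketch of snapshot.yaml + pvc-restore.yaml equivalents, applied against the same context
kubectl --context addons-651467 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed default class of the addon
  source:
    persistentVolumeClaimName: hpvc                  # PVC created earlier in the test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                  # assumed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF

The restore PVC only binds once the snapshot reports readyToUse, which is what the repeated `get volumesnapshot ... readyToUse` and `get pvc hpvc-restore ... status.phase` polls above are waiting for.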

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-651467 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-651467 --alsologtostderr -v=1: exit status 11 (291.368412ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:37.331855   11512 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:37.332144   11512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:37.332177   11512 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:37.332196   11512 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:37.332476   11512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:37.332777   11512 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:37.333158   11512 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:37.333196   11512 addons.go:607] checking whether the cluster is paused
	I1109 13:32:37.333320   11512 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:37.333355   11512 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:37.333800   11512 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:37.361773   11512 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:37.361829   11512 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:37.383213   11512 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:37.495247   11512 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:37.495372   11512 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:37.535250   11512 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:37.535269   11512 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:37.535273   11512 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:37.535279   11512 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:37.535282   11512 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:37.535288   11512 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:37.535291   11512 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:37.535294   11512 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:37.535298   11512 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:37.535305   11512 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:37.535309   11512 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:37.535312   11512 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:37.535316   11512 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:37.535319   11512 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:37.535323   11512 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:37.535330   11512 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:37.535334   11512 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:37.535339   11512 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:37.535342   11512 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:37.535345   11512 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:37.535350   11512 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:37.535353   11512 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:37.535356   11512 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:37.535359   11512 cri.go:89] found id: ""
	I1109 13:32:37.535411   11512 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:37.555616   11512 out.go:203] 
	W1109 13:32:37.561276   11512 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:37.561300   11512 out.go:285] * 
	W1109 13:32:37.565759   11512 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:37.569222   11512 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-651467 --alsologtostderr -v=1": exit status 11
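Note: the Headlamp enable fails on the same pre-flight check, not on anything Headlamp-specific. In the trace above, crictl lists the kube-system containers without trouble; only the follow-up "sudo runc list -f json" fails, so the command exits with MK_ADDON_ENABLE_PAUSED before any Headlamp manifest is considered. A quick sanity check that the node is not actually paused, assuming the docker driver container from this run still exists (the inspect format string follows the cli_runner calls in the trace, and the docker inspect dump below likewise shows "Paused": false):

	out/minikube-linux-arm64 -p addons-651467 status
	docker container inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' addons-651467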
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-651467
helpers_test.go:243: (dbg) docker inspect addons-651467:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6",
	        "Created": "2025-11-09T13:29:50.726053015Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 5274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:29:50.79153505Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/hosts",
	        "LogPath": "/var/lib/docker/containers/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6/c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6-json.log",
	        "Name": "/addons-651467",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-651467:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-651467",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4ab4837e17b267aefc717d6354da6da657718a53ac2c8c63de7aa7f9dc168c6",
	                "LowerDir": "/var/lib/docker/overlay2/5947e5537d85292bcbeafa0ddc99193912a0755c4189834bd896c1d94caf2b0e-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5947e5537d85292bcbeafa0ddc99193912a0755c4189834bd896c1d94caf2b0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5947e5537d85292bcbeafa0ddc99193912a0755c4189834bd896c1d94caf2b0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5947e5537d85292bcbeafa0ddc99193912a0755c4189834bd896c1d94caf2b0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-651467",
	                "Source": "/var/lib/docker/volumes/addons-651467/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-651467",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-651467",
	                "name.minikube.sigs.k8s.io": "addons-651467",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fc3e6ee83c5f9ad2fb1e922de106dfc451222b9bf113c3d269984e224ee5d34",
	            "SandboxKey": "/var/run/docker/netns/8fc3e6ee83c5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-651467": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:ed:4f:61:ce:fd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b7d1097bb55325287ea53a686e1ae72c1a0bec65934ce7e004057f3409631782",
	                    "EndpointID": "0a24fafb4e1067e439d55d52bab317d3e78226374a2f22ddb0a2fcd7482e5919",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-651467",
	                        "c4ab4837e17b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
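Note: for triage, the inspect dump above pins down three things: the kic container is running and not paused, /run and /tmp are tmpfs mounts inside the node (consistent with the missing /run/runc, since anything under /run only exists once something creates it at runtime), and SSH into the node is published on 127.0.0.1:32768, the same port the sshutil line connected to. The relevant fields can be pulled without scanning the full JSON; a sketch with field paths read off the dump above (the ssh-port format string is the one used by cli_runner in this report):

	docker container inspect -f 'tmpfs={{json .HostConfig.Tmpfs}}' addons-651467
	docker container inspect -f 'ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-651467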
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-651467 -n addons-651467
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-651467 logs -n 25: (1.690332002s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-802526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-802526   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-802526                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-802526   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ -o=json --download-only -p download-only-603977 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-603977   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-603977                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-603977   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-802526                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-802526   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-603977                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-603977   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ --download-only -p download-docker-143180 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-143180 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ -p download-docker-143180                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-143180 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ --download-only -p binary-mirror-258515 --alsologtostderr --binary-mirror http://127.0.0.1:41697 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-258515   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ -p binary-mirror-258515                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-258515   │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ addons  │ enable dashboard -p addons-651467                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ addons  │ disable dashboard -p addons-651467                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ start   │ -p addons-651467 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-651467 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ ip      │ addons-651467 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-651467 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ ssh     │ addons-651467 ssh cat /opt/local-path-provisioner/pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │ 09 Nov 25 13:32 UTC │
	│ addons  │ addons-651467 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ enable headlamp -p addons-651467 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	│ addons  │ addons-651467 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-651467          │ jenkins │ v1.37.0 │ 09 Nov 25 13:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:24
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:24.202235    4875 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:29:24.202459    4875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:24.202485    4875 out.go:374] Setting ErrFile to fd 2...
	I1109 13:29:24.202504    4875 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:24.202801    4875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:29:24.203283    4875 out.go:368] Setting JSON to false
	I1109 13:29:24.204130    4875 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":715,"bootTime":1762694250,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:29:24.204224    4875 start.go:143] virtualization:  
	I1109 13:29:24.207649    4875 out.go:179] * [addons-651467] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:29:24.211280    4875 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:29:24.211345    4875 notify.go:221] Checking for updates...
	I1109 13:29:24.217219    4875 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:29:24.220225    4875 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:29:24.223051    4875 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:29:24.225889    4875 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:29:24.228736    4875 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:29:24.231707    4875 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:24.255327    4875 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:29:24.255457    4875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:24.317017    4875 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-09 13:29:24.308126112 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:29:24.317134    4875 docker.go:319] overlay module found
	I1109 13:29:24.320198    4875 out.go:179] * Using the docker driver based on user configuration
	I1109 13:29:24.323029    4875 start.go:309] selected driver: docker
	I1109 13:29:24.323052    4875 start.go:930] validating driver "docker" against <nil>
	I1109 13:29:24.323066    4875 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:29:24.324072    4875 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:24.379970    4875 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-09 13:29:24.370892342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:29:24.380119    4875 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:24.380362    4875 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:29:24.383236    4875 out.go:179] * Using Docker driver with root privileges
	I1109 13:29:24.386124    4875 cni.go:84] Creating CNI manager for ""
	I1109 13:29:24.386189    4875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:24.386203    4875 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:24.386283    4875 start.go:353] cluster config:
	{Name:addons-651467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1109 13:29:24.391120    4875 out.go:179] * Starting "addons-651467" primary control-plane node in "addons-651467" cluster
	I1109 13:29:24.393901    4875 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:29:24.396792    4875 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:29:24.399528    4875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:24.399584    4875 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 13:29:24.399597    4875 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:24.399599    4875 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:29:24.399691    4875 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:29:24.399702    4875 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:29:24.400107    4875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/config.json ...
	I1109 13:29:24.400133    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/config.json: {Name:mk129c827ff3469375a4a6ce55f7b60ccdf45bc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:24.415274    4875 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:29:24.415386    4875 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1109 13:29:24.415404    4875 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1109 13:29:24.415408    4875 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1109 13:29:24.415415    4875 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1109 13:29:24.415420    4875 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1109 13:29:42.076221    4875 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1109 13:29:42.076259    4875 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:29:42.076291    4875 start.go:360] acquireMachinesLock for addons-651467: {Name:mk4994005e3898dce07874204da9a6684eba48a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:29:42.076421    4875 start.go:364] duration metric: took 110.607µs to acquireMachinesLock for "addons-651467"
	I1109 13:29:42.076448    4875 start.go:93] Provisioning new machine with config: &{Name:addons-651467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:29:42.076537    4875 start.go:125] createHost starting for "" (driver="docker")
	I1109 13:29:42.080257    4875 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1109 13:29:42.080540    4875 start.go:159] libmachine.API.Create for "addons-651467" (driver="docker")
	I1109 13:29:42.080581    4875 client.go:173] LocalClient.Create starting
	I1109 13:29:42.080711    4875 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 13:29:42.349459    4875 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 13:29:43.828044    4875 cli_runner.go:164] Run: docker network inspect addons-651467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 13:29:43.843910    4875 cli_runner.go:211] docker network inspect addons-651467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 13:29:43.844017    4875 network_create.go:284] running [docker network inspect addons-651467] to gather additional debugging logs...
	I1109 13:29:43.844039    4875 cli_runner.go:164] Run: docker network inspect addons-651467
	W1109 13:29:43.859512    4875 cli_runner.go:211] docker network inspect addons-651467 returned with exit code 1
	I1109 13:29:43.859541    4875 network_create.go:287] error running [docker network inspect addons-651467]: docker network inspect addons-651467: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-651467 not found
	I1109 13:29:43.859560    4875 network_create.go:289] output of [docker network inspect addons-651467]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-651467 not found
	
	** /stderr **
	I1109 13:29:43.859658    4875 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:29:43.875579    4875 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400191df20}
	I1109 13:29:43.875617    4875 network_create.go:124] attempt to create docker network addons-651467 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 13:29:43.875681    4875 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-651467 addons-651467
	I1109 13:29:43.931552    4875 network_create.go:108] docker network addons-651467 192.168.49.0/24 created
	I1109 13:29:43.931592    4875 kic.go:121] calculated static IP "192.168.49.2" for the "addons-651467" container
	I1109 13:29:43.931683    4875 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 13:29:43.947447    4875 cli_runner.go:164] Run: docker volume create addons-651467 --label name.minikube.sigs.k8s.io=addons-651467 --label created_by.minikube.sigs.k8s.io=true
	I1109 13:29:43.964764    4875 oci.go:103] Successfully created a docker volume addons-651467
	I1109 13:29:43.964859    4875 cli_runner.go:164] Run: docker run --rm --name addons-651467-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-651467 --entrypoint /usr/bin/test -v addons-651467:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 13:29:46.205341    4875 cli_runner.go:217] Completed: docker run --rm --name addons-651467-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-651467 --entrypoint /usr/bin/test -v addons-651467:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (2.240444954s)
	I1109 13:29:46.205374    4875 oci.go:107] Successfully prepared a docker volume addons-651467
	I1109 13:29:46.205432    4875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:46.205447    4875 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 13:29:46.205523    4875 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-651467:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 13:29:50.651251    4875 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-651467:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.445680047s)
	I1109 13:29:50.651293    4875 kic.go:203] duration metric: took 4.445833823s to extract preloaded images to volume ...
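
The preload step above mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the profile's volume. A rough sketch of that pattern follows, assuming docker and an lz4-capable tar inside the image; the tarball path, volume name and image tag are placeholders, not the exact values minikube uses.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	tarball := "/path/to/preloaded-images.tar.lz4"          // placeholder path
	volume := "addons-651467"                               // docker volume from the log
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48"   // placeholder tag

	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v: %s\n", err, out)
		return
	}
	fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
}
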
	W1109 13:29:50.651431    4875 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 13:29:50.651568    4875 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 13:29:50.708181    4875 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-651467 --name addons-651467 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-651467 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-651467 --network addons-651467 --ip 192.168.49.2 --volume addons-651467:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 13:29:51.044906    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Running}}
	I1109 13:29:51.073396    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:29:51.100150    4875 cli_runner.go:164] Run: docker exec addons-651467 stat /var/lib/dpkg/alternatives/iptables
	I1109 13:29:51.154363    4875 oci.go:144] the created container "addons-651467" has a running status.
	I1109 13:29:51.154398    4875 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa...
	I1109 13:29:51.740335    4875 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 13:29:51.759276    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:29:51.776747    4875 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 13:29:51.776770    4875 kic_runner.go:114] Args: [docker exec --privileged addons-651467 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 13:29:51.817318    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:29:51.834283    4875 machine.go:94] provisionDockerMachine start ...
	I1109 13:29:51.834367    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:51.853856    4875 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:51.854243    4875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:51.854255    4875 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:29:51.854880    4875 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49612->127.0.0.1:32768: read: connection reset by peer
	I1109 13:29:55.003519    4875 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-651467
	
	I1109 13:29:55.003541    4875 ubuntu.go:182] provisioning hostname "addons-651467"
	I1109 13:29:55.003604    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:55.025852    4875 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:55.026190    4875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:55.026209    4875 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-651467 && echo "addons-651467" | sudo tee /etc/hostname
	I1109 13:29:55.185349    4875 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-651467
	
	I1109 13:29:55.185430    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:55.203980    4875 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:55.204296    4875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:55.204312    4875 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-651467' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-651467/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-651467' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:29:55.356069    4875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
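
provisionDockerMachine talks to the container over SSH on the forwarded port (127.0.0.1:32768 here) with the freshly generated machine key and runs the hostname commands shown above. A simplified sketch of such a provisioning call using golang.org/x/crypto/ssh; the key path is a placeholder, and host-key checking is skipped only because the target is a local container.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/path/to/machines/addons-651467/id_rsa") // placeholder path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Same provisioning command as in the log above.
	out, err := session.CombinedOutput(`sudo hostname addons-651467 && echo "addons-651467" | sudo tee /etc/hostname`)
	fmt.Printf("err=%v output=%s\n", err, out)
}
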
	I1109 13:29:55.356094    4875 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:29:55.356119    4875 ubuntu.go:190] setting up certificates
	I1109 13:29:55.356129    4875 provision.go:84] configureAuth start
	I1109 13:29:55.356212    4875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-651467
	I1109 13:29:55.373623    4875 provision.go:143] copyHostCerts
	I1109 13:29:55.373705    4875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:29:55.373830    4875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:29:55.373906    4875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:29:55.373994    4875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.addons-651467 san=[127.0.0.1 192.168.49.2 addons-651467 localhost minikube]
	I1109 13:29:55.579769    4875 provision.go:177] copyRemoteCerts
	I1109 13:29:55.579841    4875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:29:55.579917    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:55.598804    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:55.705541    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:29:55.723795    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:29:55.740609    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:29:55.757376    4875 provision.go:87] duration metric: took 401.226503ms to configureAuth
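
configureAuth generates a server certificate whose SANs cover the node name, localhost, the loopback address and the container IP (see the san=[...] list above). The sketch below is only illustrative: it produces a self-signed certificate with that SAN list, whereas the real provisioner signs the server certificate with the minikube CA key.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-651467"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-651467", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
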
	I1109 13:29:55.757400    4875 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:29:55.757583    4875 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:29:55.757681    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:55.774945    4875 main.go:143] libmachine: Using SSH client type: native
	I1109 13:29:55.775248    4875 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1109 13:29:55.775262    4875 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:29:56.038560    4875 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:29:56.038580    4875 machine.go:97] duration metric: took 4.204279672s to provisionDockerMachine
	I1109 13:29:56.038589    4875 client.go:176] duration metric: took 13.957998677s to LocalClient.Create
	I1109 13:29:56.038605    4875 start.go:167] duration metric: took 13.95806829s to libmachine.API.Create "addons-651467"
	I1109 13:29:56.038612    4875 start.go:293] postStartSetup for "addons-651467" (driver="docker")
	I1109 13:29:56.038622    4875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:29:56.038686    4875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:29:56.038734    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:56.056548    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:56.164335    4875 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:29:56.168018    4875 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:29:56.168046    4875 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:29:56.168058    4875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:29:56.168128    4875 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:29:56.168157    4875 start.go:296] duration metric: took 129.536845ms for postStartSetup
	I1109 13:29:56.168473    4875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-651467
	I1109 13:29:56.185830    4875 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/config.json ...
	I1109 13:29:56.186138    4875 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:29:56.186202    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:56.203115    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:56.304930    4875 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:29:56.309497    4875 start.go:128] duration metric: took 14.232944846s to createHost
	I1109 13:29:56.309571    4875 start.go:83] releasing machines lock for "addons-651467", held for 14.233140272s
	I1109 13:29:56.309677    4875 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-651467
	I1109 13:29:56.326564    4875 ssh_runner.go:195] Run: cat /version.json
	I1109 13:29:56.326613    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:56.326880    4875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:29:56.326932    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:29:56.346663    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:56.355951    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:29:56.447280    4875 ssh_runner.go:195] Run: systemctl --version
	I1109 13:29:56.536725    4875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:29:56.571436    4875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:29:56.575608    4875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:29:56.575717    4875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:29:56.604790    4875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 13:29:56.604812    4875 start.go:496] detecting cgroup driver to use...
	I1109 13:29:56.604843    4875 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:29:56.604896    4875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:29:56.621254    4875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:29:56.636193    4875 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:29:56.636260    4875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:29:56.653788    4875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:29:56.672984    4875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:29:56.789645    4875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:29:56.917453    4875 docker.go:234] disabling docker service ...
	I1109 13:29:56.917558    4875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:29:56.938056    4875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:29:56.950556    4875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:29:57.065685    4875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:29:57.184021    4875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:29:57.196471    4875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:29:57.210135    4875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:29:57.210281    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.219145    4875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:29:57.219223    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.227597    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.236325    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.244944    4875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:29:57.252468    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.260922    4875 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.274219    4875 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:29:57.282600    4875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:29:57.290068    4875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1109 13:29:57.290160    4875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1109 13:29:57.303473    4875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
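
The runtime setup probes net.bridge.bridge-nf-call-iptables (which fails until br_netfilter is loaded, hence the "couldn't verify netfilter" note above) and then enables IPv4 forwarding. The same two checks can be expressed by reading and writing /proc/sys directly, as in this small sketch (the write requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Equivalent of "sudo sysctl net.bridge.bridge-nf-call-iptables": the key may be
	// missing until br_netfilter is loaded, so treat a read error as "not available".
	if v, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge-nf-call-iptables not available:", err)
	} else {
		fmt.Println("bridge-nf-call-iptables =", strings.TrimSpace(string(v)))
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Println("could not enable ip_forward:", err)
	}
}
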
	I1109 13:29:57.311368    4875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:57.422450    4875 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:29:57.554184    4875 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:29:57.554317    4875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:29:57.557853    4875 start.go:564] Will wait 60s for crictl version
	I1109 13:29:57.557954    4875 ssh_runner.go:195] Run: which crictl
	I1109 13:29:57.561195    4875 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:29:57.584884    4875 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:29:57.585077    4875 ssh_runner.go:195] Run: crio --version
	I1109 13:29:57.613520    4875 ssh_runner.go:195] Run: crio --version
	I1109 13:29:57.645604    4875 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:29:57.648482    4875 cli_runner.go:164] Run: docker network inspect addons-651467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:29:57.667667    4875 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:29:57.671356    4875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:57.680593    4875 kubeadm.go:884] updating cluster {Name:addons-651467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:29:57.680706    4875 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:57.680762    4875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:57.712673    4875 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:57.712696    4875 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:29:57.712751    4875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:29:57.737639    4875 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:29:57.737662    4875 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:29:57.737670    4875 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 13:29:57.737753    4875 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-651467 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:29:57.737833    4875 ssh_runner.go:195] Run: crio config
	I1109 13:29:57.816421    4875 cni.go:84] Creating CNI manager for ""
	I1109 13:29:57.816446    4875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:57.816464    4875 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:29:57.816507    4875 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-651467 NodeName:addons-651467 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:29:57.816671    4875 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-651467"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
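	
	Once the kubeadm config above is rendered, it is copied to /var/tmp/minikube/kubeadm.yaml and kubeadm init is run against it (see the Start: line further down). A heavily reduced sketch of that write-then-init flow follows; the YAML literal is deliberately truncated, and the single --ignore-preflight-errors value stands in for the full list used in the log.
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// `cfg` would hold the full rendered config shown above; this literal only
		// marks where that YAML would go.
		cfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...rest of the rendered config...\n"
		path := "/var/tmp/minikube/kubeadm.yaml" // path used in the log
		if err := os.WriteFile(path, []byte(cfg), 0644); err != nil {
			panic(err)
		}
	
		cmd := exec.Command("kubeadm", "init",
			"--config", path,
			"--ignore-preflight-errors=SystemVerification")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s\n", err, out)
	}
	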
	
	I1109 13:29:57.816746    4875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:29:57.824517    4875 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:29:57.824609    4875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:29:57.832292    4875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 13:29:57.846595    4875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:29:57.862370    4875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1109 13:29:57.875465    4875 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 13:29:57.878954    4875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:29:57.889111    4875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:29:57.995831    4875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:29:58.012414    4875 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467 for IP: 192.168.49.2
	I1109 13:29:58.012437    4875 certs.go:195] generating shared ca certs ...
	I1109 13:29:58.012455    4875 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:58.012613    4875 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:29:58.815426    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt ...
	I1109 13:29:58.815498    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt: {Name:mkb86fe4580308a5adcf0264e830fede14e8cc36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:58.815701    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key ...
	I1109 13:29:58.815734    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key: {Name:mk48c7d5dd368e917e8673396d91313ce1411346 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:58.815853    4875 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:29:59.081454    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt ...
	I1109 13:29:59.081485    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt: {Name:mka77811779f028cd2c29c0788f4fc57f7399a8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.081695    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key ...
	I1109 13:29:59.081711    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key: {Name:mkcfe7fdba38edb59535214ca3c34887341dad32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.081823    4875 certs.go:257] generating profile certs ...
	I1109 13:29:59.081882    4875 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.key
	I1109 13:29:59.081899    4875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt with IP's: []
	I1109 13:29:59.554604    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt ...
	I1109 13:29:59.554635    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: {Name:mkb5129912da0330cf5f2087feea056b4c3687ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.554805    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.key ...
	I1109 13:29:59.554819    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.key: {Name:mka7681018893601dd5ee47377e7b97dba042747 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.554888    4875 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key.057f78a1
	I1109 13:29:59.554913    4875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt.057f78a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1109 13:29:59.992178    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt.057f78a1 ...
	I1109 13:29:59.992211    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt.057f78a1: {Name:mk5885cdcf4b26af0ab62b466c88c19552f535d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.992388    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key.057f78a1 ...
	I1109 13:29:59.992402    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key.057f78a1: {Name:mk8c34653536b2604d6587e82121e6ae9af6b189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:29:59.992486    4875 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt.057f78a1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt
	I1109 13:29:59.992572    4875 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key.057f78a1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key
	I1109 13:29:59.992629    4875 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.key
	I1109 13:29:59.992653    4875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.crt with IP's: []
	I1109 13:30:00.886859    4875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.crt ...
	I1109 13:30:00.886899    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.crt: {Name:mkbc7863fbc3d8a1220aa9fa9ef7020993e849f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:30:00.887129    4875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.key ...
	I1109 13:30:00.887145    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.key: {Name:mk8d8c3fc1ee7ee0888a2a40426e84bb5152d01e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:30:00.887368    4875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:30:00.887408    4875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:30:00.887439    4875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:30:00.887465    4875 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:30:00.888136    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:30:00.913866    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:30:00.937709    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:30:00.962691    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:30:00.984193    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1109 13:30:01.023662    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:30:01.059173    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:30:01.097562    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:30:01.126278    4875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:30:01.152939    4875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:30:01.171279    4875 ssh_runner.go:195] Run: openssl version
	I1109 13:30:01.196028    4875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:30:01.206138    4875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:30:01.211661    4875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:30:01.211749    4875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:30:01.258932    4875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:30:01.293464    4875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:30:01.300183    4875 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 13:30:01.300235    4875 kubeadm.go:401] StartCluster: {Name:addons-651467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-651467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:30:01.300309    4875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:30:01.300395    4875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:30:01.336003    4875 cri.go:89] found id: ""
	I1109 13:30:01.336093    4875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:30:01.348102    4875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:30:01.357950    4875 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 13:30:01.358109    4875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:30:01.368760    4875 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 13:30:01.368816    4875 kubeadm.go:158] found existing configuration files:
	
	I1109 13:30:01.368879    4875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 13:30:01.378630    4875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 13:30:01.378704    4875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 13:30:01.388686    4875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 13:30:01.404517    4875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 13:30:01.404597    4875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 13:30:01.413481    4875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 13:30:01.422962    4875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 13:30:01.423102    4875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:30:01.432121    4875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 13:30:01.442346    4875 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 13:30:01.442480    4875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 13:30:01.453002    4875 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 13:30:01.501853    4875 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 13:30:01.502359    4875 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 13:30:01.529028    4875 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 13:30:01.529152    4875 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 13:30:01.529205    4875 kubeadm.go:319] OS: Linux
	I1109 13:30:01.529278    4875 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 13:30:01.529347    4875 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 13:30:01.529421    4875 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 13:30:01.529488    4875 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 13:30:01.529563    4875 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 13:30:01.529633    4875 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 13:30:01.529706    4875 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 13:30:01.529785    4875 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 13:30:01.529848    4875 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 13:30:01.610645    4875 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 13:30:01.610801    4875 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 13:30:01.610956    4875 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 13:30:01.621669    4875 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 13:30:01.628275    4875 out.go:252]   - Generating certificates and keys ...
	I1109 13:30:01.628377    4875 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 13:30:01.628453    4875 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 13:30:02.193390    4875 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 13:30:02.336480    4875 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 13:30:02.612356    4875 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 13:30:02.952014    4875 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 13:30:03.295042    4875 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 13:30:03.295265    4875 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-651467 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 13:30:03.830724    4875 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 13:30:03.831039    4875 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-651467 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 13:30:04.470763    4875 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 13:30:05.489721    4875 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 13:30:05.944002    4875 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 13:30:05.944329    4875 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 13:30:06.887973    4875 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 13:30:07.067707    4875 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 13:30:07.706053    4875 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 13:30:09.346813    4875 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 13:30:10.361632    4875 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 13:30:10.362229    4875 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 13:30:10.364909    4875 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 13:30:10.368450    4875 out.go:252]   - Booting up control plane ...
	I1109 13:30:10.368585    4875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 13:30:10.368689    4875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 13:30:10.368780    4875 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 13:30:10.386625    4875 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 13:30:10.386934    4875 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 13:30:10.395546    4875 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 13:30:10.395649    4875 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 13:30:10.395692    4875 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 13:30:10.523037    4875 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 13:30:10.523158    4875 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 13:30:12.024670    4875 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501969248s
	I1109 13:30:12.028493    4875 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 13:30:12.028601    4875 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1109 13:30:12.028695    4875 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 13:30:12.028777    4875 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 13:30:14.900297    4875 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.871353735s
	I1109 13:30:16.289745    4875 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.260633178s
	I1109 13:30:18.032280    4875 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003737833s
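
The control-plane-check phase above simply polls each component's health endpoint until it answers. A comparable standalone probe for the apiserver livez URL from the log is sketched below; TLS verification is disabled only because this is a local readiness poll, not a production client.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // matches the "up to 4m0s" wait in the log
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/livez")
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("kube-apiserver is healthy")
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		time.Sleep(time.Second)
	}
	fmt.Println("control plane did not become healthy in time")
}
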
	I1109 13:30:18.052926    4875 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 13:30:18.068472    4875 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 13:30:18.084175    4875 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 13:30:18.084413    4875 kubeadm.go:319] [mark-control-plane] Marking the node addons-651467 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 13:30:18.100750    4875 kubeadm.go:319] [bootstrap-token] Using token: 3icf2d.4lu3e6i9hke2tnsi
	I1109 13:30:18.103926    4875 out.go:252]   - Configuring RBAC rules ...
	I1109 13:30:18.104060    4875 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 13:30:18.112578    4875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 13:30:18.120857    4875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 13:30:18.124982    4875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 13:30:18.131351    4875 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 13:30:18.135986    4875 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 13:30:18.440141    4875 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 13:30:18.880050    4875 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 13:30:19.442022    4875 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 13:30:19.443323    4875 kubeadm.go:319] 
	I1109 13:30:19.443399    4875 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 13:30:19.443405    4875 kubeadm.go:319] 
	I1109 13:30:19.443482    4875 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 13:30:19.443487    4875 kubeadm.go:319] 
	I1109 13:30:19.443513    4875 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 13:30:19.443572    4875 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 13:30:19.443621    4875 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 13:30:19.443626    4875 kubeadm.go:319] 
	I1109 13:30:19.443679    4875 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 13:30:19.443683    4875 kubeadm.go:319] 
	I1109 13:30:19.443731    4875 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 13:30:19.443736    4875 kubeadm.go:319] 
	I1109 13:30:19.443788    4875 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 13:30:19.443863    4875 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 13:30:19.443961    4875 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 13:30:19.443966    4875 kubeadm.go:319] 
	I1109 13:30:19.444050    4875 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 13:30:19.444127    4875 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 13:30:19.444131    4875 kubeadm.go:319] 
	I1109 13:30:19.444216    4875 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 3icf2d.4lu3e6i9hke2tnsi \
	I1109 13:30:19.444326    4875 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 13:30:19.444348    4875 kubeadm.go:319] 	--control-plane 
	I1109 13:30:19.444353    4875 kubeadm.go:319] 
	I1109 13:30:19.444438    4875 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 13:30:19.444443    4875 kubeadm.go:319] 
	I1109 13:30:19.444525    4875 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 3icf2d.4lu3e6i9hke2tnsi \
	I1109 13:30:19.444640    4875 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 13:30:19.448166    4875 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 13:30:19.448400    4875 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 13:30:19.448510    4875 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 13:30:19.448525    4875 cni.go:84] Creating CNI manager for ""
	I1109 13:30:19.448536    4875 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:30:19.451788    4875 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 13:30:19.454751    4875 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 13:30:19.459011    4875 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 13:30:19.459036    4875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 13:30:19.472608    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 13:30:19.770099    4875 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:30:19.770236    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:19.770309    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-651467 minikube.k8s.io/updated_at=2025_11_09T13_30_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=addons-651467 minikube.k8s.io/primary=true
	I1109 13:30:19.974539    4875 ops.go:34] apiserver oom_adj: -16
	I1109 13:30:19.974663    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:20.475561    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:20.975503    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:21.475093    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:21.975488    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:22.475474    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:22.974746    4875 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 13:30:23.079934    4875 kubeadm.go:1114] duration metric: took 3.309742117s to wait for elevateKubeSystemPrivileges
	I1109 13:30:23.079967    4875 kubeadm.go:403] duration metric: took 21.779734371s to StartCluster
	I1109 13:30:23.079984    4875 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:30:23.080101    4875 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:30:23.080460    4875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:30:23.080640    4875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 13:30:23.080663    4875 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:30:23.080909    4875 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:30:23.080940    4875 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1109 13:30:23.081018    4875 addons.go:70] Setting yakd=true in profile "addons-651467"
	I1109 13:30:23.081031    4875 addons.go:239] Setting addon yakd=true in "addons-651467"
	I1109 13:30:23.081053    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.081498    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.082050    4875 addons.go:70] Setting metrics-server=true in profile "addons-651467"
	I1109 13:30:23.082069    4875 addons.go:239] Setting addon metrics-server=true in "addons-651467"
	I1109 13:30:23.082084    4875 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-651467"
	I1109 13:30:23.082096    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.082102    4875 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-651467"
	I1109 13:30:23.082125    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.082498    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.082626    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.085442    4875 addons.go:70] Setting registry=true in profile "addons-651467"
	I1109 13:30:23.085536    4875 addons.go:239] Setting addon registry=true in "addons-651467"
	I1109 13:30:23.085638    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.085768    4875 addons.go:70] Setting registry-creds=true in profile "addons-651467"
	I1109 13:30:23.089577    4875 addons.go:239] Setting addon registry-creds=true in "addons-651467"
	I1109 13:30:23.089643    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.090281    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.092461    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.085777    4875 addons.go:70] Setting storage-provisioner=true in profile "addons-651467"
	I1109 13:30:23.104005    4875 addons.go:239] Setting addon storage-provisioner=true in "addons-651467"
	I1109 13:30:23.104052    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.104526    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.085781    4875 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-651467"
	I1109 13:30:23.105854    4875 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-651467"
	I1109 13:30:23.106167    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.085785    4875 addons.go:70] Setting volcano=true in profile "addons-651467"
	I1109 13:30:23.126496    4875 addons.go:239] Setting addon volcano=true in "addons-651467"
	I1109 13:30:23.126563    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.127087    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.086487    4875 addons.go:70] Setting volumesnapshots=true in profile "addons-651467"
	I1109 13:30:23.136566    4875 addons.go:239] Setting addon volumesnapshots=true in "addons-651467"
	I1109 13:30:23.136605    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.137067    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.086503    4875 out.go:179] * Verifying Kubernetes components...
	I1109 13:30:23.086871    4875 addons.go:70] Setting gcp-auth=true in profile "addons-651467"
	I1109 13:30:23.159797    4875 mustload.go:66] Loading cluster: addons-651467
	I1109 13:30:23.163928    4875 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:30:23.165277    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.086879    4875 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-651467"
	I1109 13:30:23.086883    4875 addons.go:70] Setting cloud-spanner=true in profile "addons-651467"
	I1109 13:30:23.086887    4875 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-651467"
	I1109 13:30:23.184123    4875 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-651467"
	I1109 13:30:23.184178    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.184200    4875 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-651467"
	I1109 13:30:23.184241    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.184638    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.184713    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.190448    4875 addons.go:239] Setting addon cloud-spanner=true in "addons-651467"
	I1109 13:30:23.191048    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.195516    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.086891    4875 addons.go:70] Setting default-storageclass=true in profile "addons-651467"
	I1109 13:30:23.207484    4875 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-651467"
	I1109 13:30:23.086896    4875 addons.go:70] Setting inspektor-gadget=true in profile "addons-651467"
	I1109 13:30:23.207637    4875 addons.go:239] Setting addon inspektor-gadget=true in "addons-651467"
	I1109 13:30:23.207667    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.086911    4875 addons.go:70] Setting ingress=true in profile "addons-651467"
	I1109 13:30:23.207712    4875 addons.go:239] Setting addon ingress=true in "addons-651467"
	I1109 13:30:23.207738    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.086915    4875 addons.go:70] Setting ingress-dns=true in profile "addons-651467"
	I1109 13:30:23.207787    4875 addons.go:239] Setting addon ingress-dns=true in "addons-651467"
	I1109 13:30:23.207805    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.159708    4875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:30:23.209434    4875 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1109 13:30:23.217579    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.232513    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.246074    4875 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1109 13:30:23.249512    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.249784    4875 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:30:23.249797    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1109 13:30:23.249842    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.264958    4875 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1109 13:30:23.265198    4875 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 13:30:23.265211    4875 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 13:30:23.265286    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.295051    4875 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1109 13:30:23.298478    4875 out.go:179]   - Using image docker.io/registry:3.0.0
	I1109 13:30:23.302026    4875 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1109 13:30:23.302092    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1109 13:30:23.302182    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.333757    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1109 13:30:23.333791    4875 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1109 13:30:23.333863    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.341646    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.361736    4875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1109 13:30:23.362284    4875 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1109 13:30:23.389135    4875 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1109 13:30:23.389156    4875 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 13:30:23.390663    4875 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-651467"
	I1109 13:30:23.390713    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.391143    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.406762    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1109 13:30:23.409633    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1109 13:30:23.409656    4875 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1109 13:30:23.409825    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.410774    4875 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:30:23.410836    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1109 13:30:23.410913    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.462856    4875 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:30:23.462883    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:30:23.462950    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.467554    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.472811    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1109 13:30:23.493988    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1109 13:30:23.496926    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1109 13:30:23.502368    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1109 13:30:23.505237    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1109 13:30:23.508044    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1109 13:30:23.510999    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1109 13:30:23.512391    4875 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1109 13:30:23.522100    4875 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1109 13:30:23.522332    4875 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1109 13:30:23.522528    4875 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:30:23.522542    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1109 13:30:23.522622    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.542469    4875 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1109 13:30:23.548192    4875 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:30:23.548262    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1109 13:30:23.548361    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.564743    4875 addons.go:239] Setting addon default-storageclass=true in "addons-651467"
	I1109 13:30:23.564796    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:23.565194    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:23.566307    4875 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1109 13:30:23.569299    4875 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1109 13:30:23.569356    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1109 13:30:23.569470    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.581671    4875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:30:23.581856    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.585076    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.585832    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.586532    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1109 13:30:23.586547    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1109 13:30:23.586610    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.589740    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.590461    4875 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:23.593361    4875 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:23.597196    4875 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:30:23.597217    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1109 13:30:23.597277    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.620812    4875 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1109 13:30:23.623809    4875 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:30:23.623830    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1109 13:30:23.624023    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.636040    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.640117    4875 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1109 13:30:23.646122    4875 out.go:179]   - Using image docker.io/busybox:stable
	I1109 13:30:23.651322    4875 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:30:23.651353    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1109 13:30:23.651424    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.680167    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.736442    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.738071    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.751617    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.775628    4875 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:30:23.775650    4875 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:30:23.775712    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:23.796115    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.804121    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.811079    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.811834    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	W1109 13:30:23.812821    4875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:30:23.812845    4875 retry.go:31] will retry after 196.572128ms: ssh: handshake failed: EOF
	W1109 13:30:23.813084    4875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:30:23.813095    4875 retry.go:31] will retry after 148.811311ms: ssh: handshake failed: EOF
	I1109 13:30:23.813238    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:23.844899    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	W1109 13:30:23.846051    4875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:30:23.846070    4875 retry.go:31] will retry after 283.546301ms: ssh: handshake failed: EOF
	W1109 13:30:24.010544    4875 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1109 13:30:24.010644    4875 retry.go:31] will retry after 451.888093ms: ssh: handshake failed: EOF
	I1109 13:30:24.340742    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1109 13:30:24.487955    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1109 13:30:24.565629    4875 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1109 13:30:24.565702    4875 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1109 13:30:24.604087    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:30:24.611598    4875 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 13:30:24.611671    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1109 13:30:24.619908    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1109 13:30:24.632560    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1109 13:30:24.632637    4875 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1109 13:30:24.654687    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1109 13:30:24.662836    4875 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1109 13:30:24.662907    4875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1109 13:30:24.674934    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1109 13:30:24.677792    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 13:30:24.745988    4875 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:30:24.746008    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1109 13:30:24.786792    4875 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.425024806s)
	I1109 13:30:24.786870    4875 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1109 13:30:24.787976    4875 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.206271528s)
	I1109 13:30:24.788821    4875 node_ready.go:35] waiting up to 6m0s for node "addons-651467" to be "Ready" ...
	I1109 13:30:24.814290    4875 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 13:30:24.814362    4875 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 13:30:24.863198    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1109 13:30:24.863267    4875 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1109 13:30:24.898570    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1109 13:30:24.940723    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 13:30:24.944353    4875 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1109 13:30:24.944426    4875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1109 13:30:24.954474    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1109 13:30:24.954546    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1109 13:30:25.104377    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:30:25.121689    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1109 13:30:25.121769    4875 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1109 13:30:25.192933    4875 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:30:25.193010    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1109 13:30:25.213382    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1109 13:30:25.213459    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1109 13:30:25.221748    4875 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:30:25.221844    4875 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 13:30:25.236853    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1109 13:30:25.278536    4875 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1109 13:30:25.278565    4875 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1109 13:30:25.290842    4875 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-651467" context rescaled to 1 replicas
	I1109 13:30:25.393225    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 13:30:25.438693    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1109 13:30:25.499932    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1109 13:30:25.499954    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1109 13:30:25.519227    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1109 13:30:25.519249    4875 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1109 13:30:25.671321    4875 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:25.671392    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1109 13:30:25.694095    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1109 13:30:25.694167    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1109 13:30:25.880194    4875 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1109 13:30:25.880272    4875 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1109 13:30:26.046095    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:26.200363    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1109 13:30:26.200435    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1109 13:30:26.272502    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1109 13:30:26.272528    4875 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1109 13:30:26.390827    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1109 13:30:26.390850    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1109 13:30:26.514748    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1109 13:30:26.514820    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1109 13:30:26.705488    4875 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:30:26.705567    4875 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	W1109 13:30:26.809629    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:26.902084    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1109 13:30:28.483316    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.142473295s)
	I1109 13:30:28.483694    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.995667147s)
	I1109 13:30:28.483785    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.879629081s)
	I1109 13:30:28.676100    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.0561068s)
	I1109 13:30:28.676202    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.02144271s)
	I1109 13:30:28.676219    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.00122603s)
	I1109 13:30:28.676365    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.998557251s)
	I1109 13:30:28.676419    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (3.777770521s)
	I1109 13:30:28.676472    4875 addons.go:480] Verifying addon registry=true in "addons-651467"
	I1109 13:30:28.680909    4875 out.go:179] * Verifying registry addon...
	I1109 13:30:28.684059    4875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1109 13:30:28.699820    4875 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:30:28.699840    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:28.812813    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:29.209893    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.533080    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.59226929s)
	I1109 13:30:29.533158    4875 addons.go:480] Verifying addon ingress=true in "addons-651467"
	I1109 13:30:29.533359    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.428907176s)
	I1109 13:30:29.533543    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.296669566s)
	I1109 13:30:29.533603    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.140308051s)
	I1109 13:30:29.534027    4875 addons.go:480] Verifying addon metrics-server=true in "addons-651467"
	I1109 13:30:29.533641    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.094874469s)
	I1109 13:30:29.533711    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.487542902s)
	W1109 13:30:29.534151    4875 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:30:29.534238    4875 retry.go:31] will retry after 158.861087ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1109 13:30:29.536434    4875 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-651467 service yakd-dashboard -n yakd-dashboard
	
	I1109 13:30:29.536434    4875 out.go:179] * Verifying ingress addon...
	I1109 13:30:29.540217    4875 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1109 13:30:29.550087    4875 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1109 13:30:29.550108    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:29.694147    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1109 13:30:29.710715    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:29.831401    4875 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.929213838s)
	I1109 13:30:29.831444    4875 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-651467"
	I1109 13:30:29.834681    4875 out.go:179] * Verifying csi-hostpath-driver addon...
	I1109 13:30:29.838305    4875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1109 13:30:29.843402    4875 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:30:29.843425    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.060829    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.188382    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.344753    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:30.544465    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:30.688765    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:30.841741    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.043980    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.077521    4875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1109 13:30:31.077654    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:31.095830    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:30:31.187994    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.231226    4875 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1109 13:30:31.245101    4875 addons.go:239] Setting addon gcp-auth=true in "addons-651467"
	I1109 13:30:31.245153    4875 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:30:31.245642    4875 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:30:31.264193    4875 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1109 13:30:31.264248    4875 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:30:31.282316    4875 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	W1109 13:30:31.292098    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:31.341505    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.386870    4875 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1109 13:30:31.389883    4875 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1109 13:30:31.392760    4875 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1109 13:30:31.392788    4875 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1109 13:30:31.406203    4875 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1109 13:30:31.406223    4875 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1109 13:30:31.418827    4875 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:31.418852    4875 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1109 13:30:31.431939    4875 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1109 13:30:31.543908    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:31.687585    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:31.852921    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:31.932969    4875 addons.go:480] Verifying addon gcp-auth=true in "addons-651467"
	I1109 13:30:31.937981    4875 out.go:179] * Verifying gcp-auth addon...
	I1109 13:30:31.941634    4875 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1109 13:30:31.951547    4875 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1109 13:30:31.951572    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.043596    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.187217    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.341951    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.444675    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:32.543930    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:32.687662    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:32.841909    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:32.944917    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.044162    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.187078    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:33.341304    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.444560    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:33.543478    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:33.687268    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:33.791940    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:33.841539    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:33.945102    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.044349    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.187062    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.341598    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.444508    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:34.543758    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:34.687730    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:34.842259    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:34.945270    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.043787    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.188137    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.341184    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.444984    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:35.544162    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:35.687211    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:35.841549    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:35.945187    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.043229    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.186772    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:36.291837    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:36.341726    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.444502    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:36.545083    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:36.686995    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:36.841904    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:36.944875    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.044138    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.187841    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.342273    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.445022    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:37.544420    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:37.687666    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:37.842175    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:37.945389    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.044015    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.187143    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:38.341929    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.448312    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:38.543630    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:38.687074    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:38.791543    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:38.841822    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:38.945952    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.044126    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.189112    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.342174    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.444784    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:39.544128    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:39.687799    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:39.841169    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:39.945840    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.044465    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.187552    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:40.341694    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.444718    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:40.543643    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:40.687804    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:40.792427    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:40.841561    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:40.944505    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.043378    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.187405    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.341510    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.444514    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:41.543523    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:41.688906    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:41.842400    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:41.945594    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.044467    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.187411    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.341280    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.445062    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:42.543975    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:42.686736    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:42.841540    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:42.944675    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.043765    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.187610    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:43.292589    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:43.341191    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.445490    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:43.543568    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:43.687575    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:43.840929    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:43.944819    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.043584    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.187507    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:44.340907    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.444843    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:44.543987    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:44.687109    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:44.841917    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:44.944961    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.050770    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.189716    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:45.295183    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:45.342598    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.444525    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:45.543196    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:45.687358    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:45.841227    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:45.945251    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.043344    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.187357    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:46.341217    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.444933    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:46.543989    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:46.687637    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:46.842053    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:46.945238    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.043401    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.188523    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:47.341781    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.444965    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:47.543749    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:47.688073    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:47.791570    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:47.841515    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:47.945561    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.043787    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.187708    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:48.341620    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.444995    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:48.544379    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:48.687107    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:48.842167    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:48.945152    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.043234    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.187838    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:49.341999    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.445138    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:49.544052    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:49.686714    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:49.792211    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:49.842396    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:49.945527    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.043759    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.187612    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:50.341098    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.444993    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:50.543951    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:50.687648    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:50.842226    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:50.944974    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.043944    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.186997    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:51.341904    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.444653    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:51.543406    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:51.687185    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:51.849512    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:51.944444    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.043385    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.186856    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:52.291621    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:52.341323    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.445012    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:52.544163    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:52.687019    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:52.842128    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:52.951720    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.043983    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.186866    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:53.341621    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.444635    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:53.543838    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:53.687804    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:53.841239    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:53.945191    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.043518    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.187694    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:54.341742    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.444377    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:54.543264    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:54.687271    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:54.791739    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:54.841648    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:54.944532    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.044837    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.188006    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:55.341875    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.444563    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:55.543838    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:55.687907    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:55.841627    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:55.945202    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.043483    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.187179    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:56.341651    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.444691    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:56.543833    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:56.687671    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:56.792523    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:56.841418    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:56.945147    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.043924    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.187608    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:57.341872    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.444835    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:57.544131    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:57.687381    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:57.841752    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:57.944935    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.046473    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.187128    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:58.341769    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.444493    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:58.543611    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:58.687474    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:58.841692    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:58.944633    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.043620    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.187791    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:30:59.291597    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:30:59.341379    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.445007    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:30:59.544944    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:30:59.688073    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:30:59.842193    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:30:59.945063    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.072158    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.236080    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:00.347674    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.445470    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:00.543997    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:00.687676    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:00.841317    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:00.945259    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.043636    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.187996    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:01.341516    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.444531    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:01.543442    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:01.687310    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:31:01.792083    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:31:01.842124    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:01.944862    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.044457    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.187041    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:02.341796    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.444743    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:02.543669    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:02.688012    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:02.841980    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:02.945089    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.043891    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:03.187735    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:03.341646    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:03.445094    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:03.543258    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:03.687453    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1109 13:31:03.792599    4875 node_ready.go:57] node "addons-651467" has "Ready":"False" status (will retry)
	I1109 13:31:03.841835    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:03.944741    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.044019    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.187617    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:04.341496    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.444440    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:04.543400    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:04.686997    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:04.841018    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:04.944974    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.044053    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.215419    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:05.297264    4875 node_ready.go:49] node "addons-651467" is "Ready"
	I1109 13:31:05.297309    4875 node_ready.go:38] duration metric: took 40.508401799s for node "addons-651467" to be "Ready" ...
	I1109 13:31:05.297323    4875 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:31:05.297393    4875 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:31:05.314492    4875 api_server.go:72] duration metric: took 42.233802998s to wait for apiserver process to appear ...
	I1109 13:31:05.314529    4875 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:31:05.314548    4875 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 13:31:05.326881    4875 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 13:31:05.329478    4875 api_server.go:141] control plane version: v1.34.1
	I1109 13:31:05.329513    4875 api_server.go:131] duration metric: took 14.971994ms to wait for apiserver health ...
	I1109 13:31:05.329522    4875 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:31:05.343325    4875 system_pods.go:59] 19 kube-system pods found
	I1109 13:31:05.343436    4875 system_pods.go:61] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:05.343460    4875 system_pods.go:61] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending
	I1109 13:31:05.343494    4875 system_pods.go:61] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending
	I1109 13:31:05.343517    4875 system_pods.go:61] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending
	I1109 13:31:05.343535    4875 system_pods.go:61] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:05.343553    4875 system_pods.go:61] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:05.343571    4875 system_pods.go:61] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:05.343599    4875 system_pods.go:61] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:05.343623    4875 system_pods.go:61] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending
	I1109 13:31:05.343641    4875 system_pods.go:61] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:05.343658    4875 system_pods.go:61] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:05.343678    4875 system_pods.go:61] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending
	I1109 13:31:05.343707    4875 system_pods.go:61] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending
	I1109 13:31:05.343728    4875 system_pods.go:61] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending
	I1109 13:31:05.343749    4875 system_pods.go:61] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:05.343769    4875 system_pods.go:61] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending
	I1109 13:31:05.343791    4875 system_pods.go:61] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.343826    4875 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.343844    4875 system_pods.go:61] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending
	I1109 13:31:05.343910    4875 system_pods.go:74] duration metric: took 14.335442ms to wait for pod list to return data ...
	I1109 13:31:05.343942    4875 default_sa.go:34] waiting for default service account to be created ...
	I1109 13:31:05.346297    4875 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1109 13:31:05.346360    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.348684    4875 default_sa.go:45] found service account: "default"
	I1109 13:31:05.348738    4875 default_sa.go:55] duration metric: took 4.777458ms for default service account to be created ...
	I1109 13:31:05.348762    4875 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 13:31:05.364970    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:05.365053    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:05.365074    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending
	I1109 13:31:05.365093    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending
	I1109 13:31:05.365124    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending
	I1109 13:31:05.365148    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:05.365167    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:05.365184    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:05.365201    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:05.365230    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending
	I1109 13:31:05.365251    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:05.365269    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:05.365286    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending
	I1109 13:31:05.365304    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending
	I1109 13:31:05.365331    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending
	I1109 13:31:05.365356    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:05.365374    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending
	I1109 13:31:05.365394    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.365414    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.365449    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending
	I1109 13:31:05.365481    4875 retry.go:31] will retry after 295.318675ms: missing components: kube-dns
	I1109 13:31:05.467403    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:05.582593    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:05.671960    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:05.675627    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:05.675650    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:05.675661    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:05.675666    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending
	I1109 13:31:05.675672    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:05.675678    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:05.675682    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:05.675696    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:05.675704    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:05.675708    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:05.675713    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:05.675719    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:05.675723    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending
	I1109 13:31:05.675730    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:05.675736    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:05.675740    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending
	I1109 13:31:05.675748    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.675774    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.675782    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:31:05.675799    4875 retry.go:31] will retry after 242.088146ms: missing components: kube-dns
	I1109 13:31:05.692207    4875 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1109 13:31:05.692233    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:05.841781    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:05.928229    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:05.928268    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:05.928278    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:05.928287    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:05.928295    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:31:05.928310    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:05.928321    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:05.928326    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:05.928349    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:05.928358    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:05.928369    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:05.928374    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:05.928380    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:05.928390    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:31:05.928397    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:05.928403    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:05.928412    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:31:05.928421    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.928428    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:05.928439    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:31:05.928455    4875 retry.go:31] will retry after 467.918653ms: missing components: kube-dns
	I1109 13:31:05.945740    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.045025    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.194123    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:06.342954    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.406127    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:06.406167    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:06.406177    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:06.406203    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:06.406218    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:31:06.406225    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:06.406237    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:06.406242    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:06.406246    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:06.406257    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:06.406262    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:06.406266    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:06.406273    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:06.406280    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:31:06.406289    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:06.406296    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:06.406304    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:31:06.406314    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:06.406323    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:06.406333    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:31:06.406349    4875 retry.go:31] will retry after 565.373843ms: missing components: kube-dns
	I1109 13:31:06.445397    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.543520    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:06.687191    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:06.842214    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:06.948442    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:06.977367    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:06.977404    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:31:06.977414    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:06.977454    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:06.977470    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:31:06.977475    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:06.977481    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:06.977489    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:06.977493    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:06.977520    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:06.977535    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:06.977541    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:06.977557    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:06.977571    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:31:06.977581    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:06.977595    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:06.977602    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:31:06.977611    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:06.977617    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:06.977648    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 13:31:06.977670    4875 retry.go:31] will retry after 692.918636ms: missing components: kube-dns
	I1109 13:31:07.048337    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.187783    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:07.344616    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.444691    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:07.543985    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:07.679476    4875 system_pods.go:86] 19 kube-system pods found
	I1109 13:31:07.679519    4875 system_pods.go:89] "coredns-66bc5c9577-2bvft" [ec69ac87-b8fd-43ca-9881-75fdfcf79050] Running
	I1109 13:31:07.679546    4875 system_pods.go:89] "csi-hostpath-attacher-0" [87f54f6c-2cfb-45e8-8ac8-a9eccdb3e51f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1109 13:31:07.679562    4875 system_pods.go:89] "csi-hostpath-resizer-0" [397070b3-aae2-4129-9072-7de9d3ba6da9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1109 13:31:07.679571    4875 system_pods.go:89] "csi-hostpathplugin-txjcd" [6943fc25-e8c6-4fdc-ad8b-ecf63b13f5fd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1109 13:31:07.679579    4875 system_pods.go:89] "etcd-addons-651467" [513b6301-60f7-4c1a-b01a-baf84b87bd75] Running
	I1109 13:31:07.679585    4875 system_pods.go:89] "kindnet-9qtn5" [d33de5ba-e443-452e-8fe7-5deb7ad623ca] Running
	I1109 13:31:07.679610    4875 system_pods.go:89] "kube-apiserver-addons-651467" [3f3ad910-fa5d-4d5c-88db-c8305e10d2c4] Running
	I1109 13:31:07.679628    4875 system_pods.go:89] "kube-controller-manager-addons-651467" [b37e9a85-6d92-4d6e-be54-8ff0648c2f93] Running
	I1109 13:31:07.679645    4875 system_pods.go:89] "kube-ingress-dns-minikube" [87ed2174-b7f3-4e0b-b964-91736f08eb82] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1109 13:31:07.679654    4875 system_pods.go:89] "kube-proxy-mbtfx" [be5bd5f2-d003-4c6a-a3bf-db956ea6e05a] Running
	I1109 13:31:07.679659    4875 system_pods.go:89] "kube-scheduler-addons-651467" [d370dd8e-5eaa-4ea0-bb9e-6893d5047d12] Running
	I1109 13:31:07.679681    4875 system_pods.go:89] "metrics-server-85b7d694d7-lmgbd" [448bf1e7-7a08-483f-8dc9-602a084beee5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 13:31:07.679696    4875 system_pods.go:89] "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1109 13:31:07.679713    4875 system_pods.go:89] "registry-6b586f9694-kzz6v" [44b6807f-27e2-412f-a961-c988ec4d7151] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1109 13:31:07.679726    4875 system_pods.go:89] "registry-creds-764b6fb674-sppdf" [db731fa9-ef93-469f-8a5f-23bd2aa1c2b8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1109 13:31:07.679734    4875 system_pods.go:89] "registry-proxy-7mv24" [15b0c22b-2fbc-4269-8253-7dbe1e18c15c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1109 13:31:07.679758    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-d4qqx" [e84f0c00-37e8-49f5-9221-f7abf50a0b54] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:07.679773    4875 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jmnwh" [a1da803e-5dfc-4660-88f5-05d16292eda1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1109 13:31:07.679779    4875 system_pods.go:89] "storage-provisioner" [5f59a559-3989-40c0-9f6e-8df14dc8db7a] Running
	I1109 13:31:07.679792    4875 system_pods.go:126] duration metric: took 2.331012986s to wait for k8s-apps to be running ...
	I1109 13:31:07.679799    4875 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:31:07.679897    4875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:31:07.688565    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:07.698508    4875 system_svc.go:56] duration metric: took 18.686427ms WaitForService to wait for kubelet
	I1109 13:31:07.698536    4875 kubeadm.go:587] duration metric: took 44.617850738s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:31:07.698555    4875 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:31:07.701923    4875 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 13:31:07.701964    4875 node_conditions.go:123] node cpu capacity is 2
	I1109 13:31:07.701978    4875 node_conditions.go:105] duration metric: took 3.388756ms to run NodePressure ...
	I1109 13:31:07.702005    4875 start.go:242] waiting for startup goroutines ...
	I1109 13:31:07.842394    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:07.960123    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.046382    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.187846    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:08.343396    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.445356    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:08.543811    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:08.687924    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:08.842283    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:08.948645    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.049198    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.187977    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:09.342353    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.445445    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:09.544002    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:09.688225    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:09.843082    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:09.945123    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.045007    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.186992    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:10.343270    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.445058    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:10.544664    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:10.687983    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:10.842447    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:10.945724    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.044365    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.187704    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:11.342798    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.445156    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:11.543531    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:11.687366    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:11.841821    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:11.944360    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.043949    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.190993    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:12.342825    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.445528    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:12.543369    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:12.687987    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:12.843084    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:12.945278    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.044050    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.187382    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:13.341803    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.445113    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:13.543461    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:13.689221    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:13.848023    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:13.945283    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.043373    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.190863    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:14.342243    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.444853    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:14.543808    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:14.687759    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:14.842206    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:14.945404    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.047137    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.187431    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:15.341907    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.445293    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:15.544445    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:15.687613    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:15.841872    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:15.944731    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.043592    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.187324    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:16.341740    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.444531    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:16.543477    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:16.687383    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:16.842388    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:16.945447    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.045184    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.189101    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:17.349179    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.450036    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:17.544763    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:17.689113    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:17.844157    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:17.945448    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.044398    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:18.187486    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:18.343418    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.447578    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:18.544146    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:18.687790    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:18.842840    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:18.944690    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.043900    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:19.187354    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:19.342512    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.445269    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:19.546362    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:19.686902    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:19.842342    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:19.947533    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.043959    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:20.188453    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:20.343467    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.444967    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:20.544632    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:20.687560    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:20.842606    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:20.944591    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.043938    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:21.188426    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:21.341610    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:21.444757    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:21.544030    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:21.688620    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:21.841769    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:21.945236    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:22.043303    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:22.187945    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:22.342469    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:22.445890    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:22.544575    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:22.687594    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:22.842331    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:22.945779    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.044541    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:23.187397    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:23.344822    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:23.445600    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:23.543995    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:23.688150    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:23.843190    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:23.944798    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.044277    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:24.187155    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:24.342839    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:24.445181    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:24.544115    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:24.687983    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:24.842171    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:24.945373    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:25.043491    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:25.187806    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:25.341828    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:25.444875    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:25.544130    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:25.687326    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:25.842203    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:25.945528    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:26.043966    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:26.188173    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:26.344388    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:26.445815    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:26.544249    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:26.687080    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:26.841607    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:26.945926    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:27.047242    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:27.187195    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:27.342760    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:27.445176    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:27.543478    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:27.687982    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:27.842558    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:27.944879    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:28.044377    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:28.188063    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:28.342036    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:28.445665    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:28.544099    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:28.687583    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:28.841785    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:28.944916    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:29.044315    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:29.187575    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:29.341737    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:29.444575    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:29.543715    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:29.687776    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:29.842431    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:29.946012    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:30.044243    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:30.187701    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:30.342547    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:30.445283    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:30.543718    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:30.687550    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:30.842633    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:30.944680    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:31.044258    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:31.187306    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:31.341708    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:31.445494    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:31.543574    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:31.688014    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:31.842688    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:31.946687    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:32.047125    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:32.192866    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:32.342357    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:32.445380    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:32.544778    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:32.687669    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:32.842254    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:32.945295    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:33.044784    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:33.187760    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:33.343100    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:33.447328    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:33.543366    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:33.688731    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:33.842796    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:33.945690    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:34.044155    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:34.188030    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:34.342055    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:34.448857    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:34.544248    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:34.689320    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:34.842107    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:34.945278    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:35.043821    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:35.188379    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:35.342020    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:35.445289    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:35.543490    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:35.687199    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:35.842553    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:35.944595    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:36.044417    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:36.187616    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:36.342137    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:36.445625    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:36.543691    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:36.687677    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:36.842580    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:36.945091    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:37.045399    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:37.220947    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:37.351341    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:37.445243    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:37.543533    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:37.688016    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:37.843577    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:37.944950    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:38.044236    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:38.188016    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:38.342953    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:38.445788    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:38.546825    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:38.687962    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:38.843115    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:38.945213    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:39.043172    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:39.187235    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:39.343301    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:39.445517    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:39.543933    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:39.687495    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:39.841859    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:39.944789    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:40.044087    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:40.187795    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:40.342135    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:40.445282    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:40.543473    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:40.687270    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:40.841498    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:40.945192    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:41.044162    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:41.186994    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:41.342455    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:41.446241    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:41.546227    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:41.687218    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:41.842979    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:41.945303    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:42.043956    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:42.187315    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:42.341928    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:42.445341    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:42.543917    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:42.687782    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:42.842432    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:42.945810    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:43.044393    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:43.187754    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1109 13:31:43.342273    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:43.461680    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:43.549767    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:43.697455    4875 kapi.go:107] duration metric: took 1m15.013397296s to wait for kubernetes.io/minikube-addons=registry ...
	I1109 13:31:43.842415    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:43.946510    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:44.043559    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:44.342061    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:44.445557    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:44.547096    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:44.842895    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:44.944782    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:45.047173    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:45.341699    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:45.445454    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:45.543650    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:45.841933    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:45.944804    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:46.044456    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:46.343326    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:46.446617    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:46.544102    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:46.841436    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:46.946046    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:47.044429    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:47.342123    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:47.445977    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:47.544249    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:47.841836    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:47.944657    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:48.044235    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:48.342357    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:48.445867    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1109 13:31:48.549619    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:48.843075    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:48.945260    4875 kapi.go:107] duration metric: took 1m17.003625512s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1109 13:31:48.948561    4875 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-651467 cluster.
	I1109 13:31:48.952537    4875 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1109 13:31:48.955607    4875 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1109 13:31:49.044055    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:49.342258    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:49.543429    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:49.842401    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:50.044020    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:50.342456    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:50.544193    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:50.842339    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:51.043329    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:51.341475    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:51.543592    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:51.841701    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:52.044393    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:52.341902    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:52.544571    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:52.842440    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:53.043262    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:53.341824    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:53.543939    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:53.846287    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:54.045580    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:54.342291    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:54.546575    4875 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1109 13:31:54.842213    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:55.043634    4875 kapi.go:107] duration metric: took 1m25.503415203s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1109 13:31:55.343987    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:55.841129    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:56.341159    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:56.841674    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:57.342718    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:57.848790    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:58.343439    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:58.853473    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:59.356564    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:31:59.844687    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:00.348620    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:00.842376    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:01.341809    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:01.842406    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:02.342580    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:02.842771    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:03.341724    4875 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1109 13:32:03.842929    4875 kapi.go:107] duration metric: took 1m34.00462405s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1109 13:32:03.846167    4875 out.go:179] * Enabled addons: registry-creds, storage-provisioner, storage-provisioner-rancher, inspektor-gadget, amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1109 13:32:03.849156    4875 addons.go:515] duration metric: took 1m40.768204928s for enable addons: enabled=[registry-creds storage-provisioner storage-provisioner-rancher inspektor-gadget amd-gpu-device-plugin nvidia-device-plugin ingress-dns cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1109 13:32:03.849212    4875 start.go:247] waiting for cluster config update ...
	I1109 13:32:03.849237    4875 start.go:256] writing updated cluster config ...
	I1109 13:32:03.850140    4875 ssh_runner.go:195] Run: rm -f paused
	I1109 13:32:03.853966    4875 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:32:03.857807    4875 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2bvft" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.863009    4875 pod_ready.go:94] pod "coredns-66bc5c9577-2bvft" is "Ready"
	I1109 13:32:03.863043    4875 pod_ready.go:86] duration metric: took 5.206095ms for pod "coredns-66bc5c9577-2bvft" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.865356    4875 pod_ready.go:83] waiting for pod "etcd-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.869934    4875 pod_ready.go:94] pod "etcd-addons-651467" is "Ready"
	I1109 13:32:03.870023    4875 pod_ready.go:86] duration metric: took 4.627924ms for pod "etcd-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.872828    4875 pod_ready.go:83] waiting for pod "kube-apiserver-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.877953    4875 pod_ready.go:94] pod "kube-apiserver-addons-651467" is "Ready"
	I1109 13:32:03.877981    4875 pod_ready.go:86] duration metric: took 5.12629ms for pod "kube-apiserver-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:03.880979    4875 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:04.257997    4875 pod_ready.go:94] pod "kube-controller-manager-addons-651467" is "Ready"
	I1109 13:32:04.258077    4875 pod_ready.go:86] duration metric: took 377.069727ms for pod "kube-controller-manager-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:04.458846    4875 pod_ready.go:83] waiting for pod "kube-proxy-mbtfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:04.858165    4875 pod_ready.go:94] pod "kube-proxy-mbtfx" is "Ready"
	I1109 13:32:04.858195    4875 pod_ready.go:86] duration metric: took 399.321259ms for pod "kube-proxy-mbtfx" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:05.058712    4875 pod_ready.go:83] waiting for pod "kube-scheduler-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:05.457517    4875 pod_ready.go:94] pod "kube-scheduler-addons-651467" is "Ready"
	I1109 13:32:05.457545    4875 pod_ready.go:86] duration metric: took 398.743819ms for pod "kube-scheduler-addons-651467" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:32:05.457558    4875 pod_ready.go:40] duration metric: took 1.60355954s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:32:05.527300    4875 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 13:32:05.530609    4875 out.go:179] * Done! kubectl is now configured to use "addons-651467" cluster and "default" namespace by default
	
	
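	(Editor's illustration, not part of the captured log.) The gcp-auth addon output above says a pod can opt out of credential mounting by carrying a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod spec follows; the pod name, image, and label value are assumptions for illustration only and do not come from this test run:
	
	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-creds        # hypothetical name, not from this report
	  labels:
	    gcp-auth-skip-secret: "true"    # label key taken from the addon output above; value assumed
	spec:
	  containers:
	  - name: app
	    image: busybox                  # placeholder image
	    command: ["sleep", "3600"]
	
	Since gcp-auth acts as a mutating admission webhook (see the gcp-auth-mutate.k8s.io entries in the kube-apiserver log below), the label has to be present when the pod is created, which is consistent with the note above about recreating existing pods or rerunning the addon with --refresh.
	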
	==> CRI-O <==
	Nov 09 13:32:34 addons-651467 crio[830]: time="2025-11-09T13:32:34.513777644Z" level=info msg="Started container" PID=5255 containerID=3337cd2415db927d125fc1525a47ae32d82fa6c2205779e7b35e88ad0d2b44b5 description=default/test-local-path/busybox id=2db02fd4-00dd-4c80-9835-10a0b7af7086 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7cf471d392a65659791b80646d6f0e91ce55aa38d4dd680bad8bb87387df601f
	Nov 09 13:32:35 addons-651467 crio[830]: time="2025-11-09T13:32:35.75966303Z" level=info msg="Stopping pod sandbox: 7cf471d392a65659791b80646d6f0e91ce55aa38d4dd680bad8bb87387df601f" id=e32f2948-43ad-4fc9-b75b-34ecc1b23346 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:32:35 addons-651467 crio[830]: time="2025-11-09T13:32:35.760051445Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:7cf471d392a65659791b80646d6f0e91ce55aa38d4dd680bad8bb87387df601f UID:e171f132-b733-4b54-ae4c-19c7f35a643d NetNS:/var/run/netns/ca817874-037e-4fa0-b83d-2e349a49632c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012ac840}] Aliases:map[]}"
	Nov 09 13:32:35 addons-651467 crio[830]: time="2025-11-09T13:32:35.76022205Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Nov 09 13:32:35 addons-651467 crio[830]: time="2025-11-09T13:32:35.79445866Z" level=info msg="Stopped pod sandbox: 7cf471d392a65659791b80646d6f0e91ce55aa38d4dd680bad8bb87387df601f" id=e32f2948-43ad-4fc9-b75b-34ecc1b23346 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.57708793Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d/POD" id=79df0e30-f5cf-4466-901b-a7dcea1b05fb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.577237677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.608368647Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d Namespace:local-path-storage ID:fc863424c5f3e2b7936a30e861bea5212383ad9396c90b89c1324f2927579020 UID:14a940c4-9553-4233-9b17-b4a97753eab9 NetNS:/var/run/netns/03b3c894-e20b-419a-8c30-ea6b25d110b0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012acb78}] Aliases:map[]}"
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.608414243Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d to CNI network \"kindnet\" (type=ptp)"
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.630549162Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d Namespace:local-path-storage ID:fc863424c5f3e2b7936a30e861bea5212383ad9396c90b89c1324f2927579020 UID:14a940c4-9553-4233-9b17-b4a97753eab9 NetNS:/var/run/netns/03b3c894-e20b-419a-8c30-ea6b25d110b0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40012acb78}] Aliases:map[]}"
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.63070421Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d for CNI network kindnet (type=ptp)"
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.639720298Z" level=info msg="Ran pod sandbox fc863424c5f3e2b7936a30e861bea5212383ad9396c90b89c1324f2927579020 with infra container: local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d/POD" id=79df0e30-f5cf-4466-901b-a7dcea1b05fb name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.644503762Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e4790142-4fb2-4d58-9386-1bc7831f809e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.649305639Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f0bbac92-261b-4ff3-bad9-4011ac233cf9 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.659806578Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d/helper-pod" id=6ba27103-fd62-48c4-a32d-cc2d059326d4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.659999978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.684809863Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.685349601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.722147928Z" level=info msg="Created container f50af473d9feba1ed423cdf2ee410a3cc9ada0d06e4584913e9eeb7290efcd1a: local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d/helper-pod" id=6ba27103-fd62-48c4-a32d-cc2d059326d4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.723940586Z" level=info msg="Starting container: f50af473d9feba1ed423cdf2ee410a3cc9ada0d06e4584913e9eeb7290efcd1a" id=5164a7d3-4ab4-4c9f-87a8-8b808d804758 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 13:32:37 addons-651467 crio[830]: time="2025-11-09T13:32:37.748399177Z" level=info msg="Started container" PID=5371 containerID=f50af473d9feba1ed423cdf2ee410a3cc9ada0d06e4584913e9eeb7290efcd1a description=local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d/helper-pod id=5164a7d3-4ab4-4c9f-87a8-8b808d804758 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc863424c5f3e2b7936a30e861bea5212383ad9396c90b89c1324f2927579020
	Nov 09 13:32:38 addons-651467 crio[830]: time="2025-11-09T13:32:38.79821239Z" level=info msg="Stopping pod sandbox: fc863424c5f3e2b7936a30e861bea5212383ad9396c90b89c1324f2927579020" id=bc242ff5-6b6a-4581-814c-39f609518bb4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:32:38 addons-651467 crio[830]: time="2025-11-09T13:32:38.798584206Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d Namespace:local-path-storage ID:fc863424c5f3e2b7936a30e861bea5212383ad9396c90b89c1324f2927579020 UID:14a940c4-9553-4233-9b17-b4a97753eab9 NetNS:/var/run/netns/03b3c894-e20b-419a-8c30-ea6b25d110b0 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000143c58}] Aliases:map[]}"
	Nov 09 13:32:38 addons-651467 crio[830]: time="2025-11-09T13:32:38.79894314Z" level=info msg="Deleting pod local-path-storage_helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d from CNI network \"kindnet\" (type=ptp)"
	Nov 09 13:32:38 addons-651467 crio[830]: time="2025-11-09T13:32:38.832803379Z" level=info msg="Stopped pod sandbox: fc863424c5f3e2b7936a30e861bea5212383ad9396c90b89c1324f2927579020" id=bc242ff5-6b6a-4581-814c-39f609518bb4 name=/runtime.v1.RuntimeService/StopPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	f50af473d9feb       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   fc863424c5f3e       helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d   local-path-storage
	3337cd2415db9       docker.io/library/busybox@sha256:079b4a73854a059a2073c6e1a031b17fcbf23a47c6c59ae760d78045199e403c                                            4 seconds ago        Exited              busybox                                  0                   7cf471d392a65       test-local-path                                              default
	c7c65e3b3bbaf       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   88efb116adebe       helper-pod-create-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d   local-path-storage
	68ba00df88f9e       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          9 seconds ago        Exited              registry-test                            0                   10d76d4715727       registry-test                                                default
	006bf09a4271c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   82d821c1afe81       busybox                                                      default
	d2bf491a803e1       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          36 seconds ago       Running             csi-snapshotter                          0                   c36845a2e7f54       csi-hostpathplugin-txjcd                                     kube-system
	ae9a6f508e15b       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          37 seconds ago       Running             csi-provisioner                          0                   c36845a2e7f54       csi-hostpathplugin-txjcd                                     kube-system
	93a600602192b       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            39 seconds ago       Running             liveness-probe                           0                   c36845a2e7f54       csi-hostpathplugin-txjcd                                     kube-system
	f480ecab5b392       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           40 seconds ago       Running             hostpath                                 0                   c36845a2e7f54       csi-hostpathplugin-txjcd                                     kube-system
	7d0e397731f1d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c2c5268a38de5c792beb84122c5350c644fbb9b85e04342ef72fa9a6d052f0b0                            41 seconds ago       Running             gadget                                   0                   a9a4dc25ab575       gadget-9q8bf                                                 gadget
	9da3d3ae626ec       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             45 seconds ago       Running             controller                               0                   d2c3faf0fc87d       ingress-nginx-controller-675c5ddd98-6lswb                    ingress-nginx
	90208adb8fd21       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 51 seconds ago       Running             gcp-auth                                 0                   9d1c631f65bae       gcp-auth-78565c9fb4-qqfqm                                    gcp-auth
	a21703b53016b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                54 seconds ago       Running             node-driver-registrar                    0                   c36845a2e7f54       csi-hostpathplugin-txjcd                                     kube-system
	00a017d960b12       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              55 seconds ago       Running             registry-proxy                           0                   580951646e443       registry-proxy-7mv24                                         kube-system
	ddbfebb8b3bd8       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              59 seconds ago       Running             csi-resizer                              0                   00963cb17a76a       csi-hostpath-resizer-0                                       kube-system
	cf0248d05e312       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   ecaa5e9d76d87       nvidia-device-plugin-daemonset-rx8x7                         kube-system
	7ab837fe1c905       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              patch                                    0                   634ea2c369e72       ingress-nginx-admission-patch-bp4lk                          ingress-nginx
	14c1c07c042a8       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   beaeca0cb57d4       registry-6b586f9694-kzz6v                                    kube-system
	d6ac5bca1cd4a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   d19f25ed24632       kube-ingress-dns-minikube                                    kube-system
	8f8d82b9ad544       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   cd681023af0f6       csi-hostpath-attacher-0                                      kube-system
	a330963f46686       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   f3fbd629991d7       cloud-spanner-emulator-6f9fcf858b-gv67d                      default
	07e93bef4f027       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   b785fa871315c       metrics-server-85b7d694d7-lmgbd                              kube-system
	1cfcc34c91d70       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   c36845a2e7f54       csi-hostpathplugin-txjcd                                     kube-system
	bf2a38a499919       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   9bad713001cd2       ingress-nginx-admission-create-29qmn                         ingress-nginx
	c8972766fd694       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   9d2e90e9f9df4       snapshot-controller-7d9fbc56b8-d4qqx                         kube-system
	9b1d34d40bba4       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   c7e75da2a7ca4       snapshot-controller-7d9fbc56b8-jmnwh                         kube-system
	a23ffdd04b966       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   5cb2e8be2016a       yakd-dashboard-5ff678cb9-srcxl                               yakd-dashboard
	dd4e8ee49564d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   c1b62a03ba4e6       local-path-provisioner-648f6765c9-mlhnm                      local-path-storage
	656c0f0ceda1f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   874a92377ddf2       storage-provisioner                                          kube-system
	c23a1bc6ea5a8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   0a257a162663e       coredns-66bc5c9577-2bvft                                     kube-system
	d46f515271a1e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   f5f39b8c487ea       kindnet-9qtn5                                                kube-system
	0ba9b918f523b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   9e232d2aed376       kube-proxy-mbtfx                                             kube-system
	bfadffb4d9828       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   106c96544d7d7       kube-controller-manager-addons-651467                        kube-system
	ab555c10c248c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   a5ecc3135a681       kube-scheduler-addons-651467                                 kube-system
	7f6d6e73f49ba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   c5f9839eb3fd6       kube-apiserver-addons-651467                                 kube-system
	e8a6b101abe65       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   755ffeca2c7d7       etcd-addons-651467                                           kube-system
	
	
	==> coredns [c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a] <==
	[INFO] 10.244.0.15:53915 - 14353 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.00300355s
	[INFO] 10.244.0.15:53915 - 65526 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000233425s
	[INFO] 10.244.0.15:53915 - 46749 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000157082s
	[INFO] 10.244.0.15:49252 - 7488 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000178417s
	[INFO] 10.244.0.15:49252 - 7282 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097044s
	[INFO] 10.244.0.15:45563 - 15603 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000521399s
	[INFO] 10.244.0.15:45563 - 15775 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000170778s
	[INFO] 10.244.0.15:52691 - 59110 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105372s
	[INFO] 10.244.0.15:52691 - 59298 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000181444s
	[INFO] 10.244.0.15:42324 - 25155 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001457733s
	[INFO] 10.244.0.15:42324 - 25368 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00152463s
	[INFO] 10.244.0.15:54082 - 23193 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000120783s
	[INFO] 10.244.0.15:54082 - 23020 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138505s
	[INFO] 10.244.0.19:45970 - 6597 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017072s
	[INFO] 10.244.0.19:53831 - 29398 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00007169s
	[INFO] 10.244.0.19:59422 - 47365 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000089758s
	[INFO] 10.244.0.19:48802 - 62789 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113677s
	[INFO] 10.244.0.19:52243 - 50316 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105922s
	[INFO] 10.244.0.19:37086 - 56244 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000075144s
	[INFO] 10.244.0.19:54727 - 7982 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001616883s
	[INFO] 10.244.0.19:57198 - 40082 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001664498s
	[INFO] 10.244.0.19:44638 - 4097 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001329951s
	[INFO] 10.244.0.19:45027 - 20462 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001348733s
	[INFO] 10.244.0.23:57323 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000165781s
	[INFO] 10.244.0.23:33173 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161645s
	
	
	==> describe nodes <==
	Name:               addons-651467
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-651467
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=addons-651467
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_30_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-651467
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-651467"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:30:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-651467
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:32:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:32:21 +0000   Sun, 09 Nov 2025 13:30:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:32:21 +0000   Sun, 09 Nov 2025 13:30:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:32:21 +0000   Sun, 09 Nov 2025 13:30:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:32:21 +0000   Sun, 09 Nov 2025 13:31:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-651467
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d0f34771-67d1-4321-b924-10c217c33abf
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-6f9fcf858b-gv67d      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  gadget                      gadget-9q8bf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  gcp-auth                    gcp-auth-78565c9fb4-qqfqm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-6lswb    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m10s
	  kube-system                 coredns-66bc5c9577-2bvft                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m15s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 csi-hostpathplugin-txjcd                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 etcd-addons-651467                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-9qtn5                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-addons-651467                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-addons-651467        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-proxy-mbtfx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-addons-651467                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 metrics-server-85b7d694d7-lmgbd              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m11s
	  kube-system                 nvidia-device-plugin-daemonset-rx8x7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 registry-6b586f9694-kzz6v                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 registry-creds-764b6fb674-sppdf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 registry-proxy-7mv24                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 snapshot-controller-7d9fbc56b8-d4qqx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 snapshot-controller-7d9fbc56b8-jmnwh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  local-path-storage          local-path-provisioner-648f6765c9-mlhnm      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-srcxl               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m28s (x8 over 2m28s)  kubelet          Node addons-651467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m28s (x8 over 2m28s)  kubelet          Node addons-651467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m28s (x8 over 2m28s)  kubelet          Node addons-651467 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m20s                  kubelet          Node addons-651467 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s                  kubelet          Node addons-651467 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s                  kubelet          Node addons-651467 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m17s                  node-controller  Node addons-651467 event: Registered Node addons-651467 in Controller
	  Normal   NodeReady                94s                    kubelet          Node addons-651467 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3] <==
	{"level":"warn","ts":"2025-11-09T13:30:14.494038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.505051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.532209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.574053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.613521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.646584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.665304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.698801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.732069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.750233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.778427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.816258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.838892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.887797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.902936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.930045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.948781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:14.961926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:15.064534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:30.083841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:30.107387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:52.919771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:52.935964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:52.965286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:30:52.980502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41082","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [90208adb8fd21b1cdd4ad940bede247fec478373f2117c89532a7e4e22f0eb20] <==
	2025/11/09 13:31:47 GCP Auth Webhook started!
	2025/11/09 13:32:06 Ready to marshal response ...
	2025/11/09 13:32:06 Ready to write response ...
	2025/11/09 13:32:06 Ready to marshal response ...
	2025/11/09 13:32:06 Ready to write response ...
	2025/11/09 13:32:06 Ready to marshal response ...
	2025/11/09 13:32:06 Ready to write response ...
	2025/11/09 13:32:27 Ready to marshal response ...
	2025/11/09 13:32:27 Ready to write response ...
	2025/11/09 13:32:29 Ready to marshal response ...
	2025/11/09 13:32:29 Ready to write response ...
	2025/11/09 13:32:29 Ready to marshal response ...
	2025/11/09 13:32:29 Ready to write response ...
	2025/11/09 13:32:36 Ready to marshal response ...
	2025/11/09 13:32:36 Ready to write response ...
	
	
	==> kernel <==
	 13:32:39 up 15 min,  0 user,  load average: 1.77, 1.15, 0.48
	Linux addons-651467 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b] <==
	E1109 13:30:54.823591       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1109 13:30:56.022293       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 13:30:56.022403       1 metrics.go:72] Registering metrics
	I1109 13:30:56.022496       1 controller.go:711] "Syncing nftables rules"
	E1109 13:30:56.022592       1 controller.go:417] "reading nfqueue stats" err="open /proc/net/netfilter/nfnetlink_queue: no such file or directory"
	I1109 13:31:04.825939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:31:04.825994       1 main.go:301] handling current node
	I1109 13:31:14.821465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:31:14.821540       1 main.go:301] handling current node
	I1109 13:31:24.822637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:31:24.822693       1 main.go:301] handling current node
	I1109 13:31:34.822318       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:31:34.822365       1 main.go:301] handling current node
	I1109 13:31:44.822332       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:31:44.822367       1 main.go:301] handling current node
	I1109 13:31:54.821798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:31:54.821833       1 main.go:301] handling current node
	I1109 13:32:04.821255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:04.821289       1 main.go:301] handling current node
	I1109 13:32:14.823627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:14.823662       1 main.go:301] handling current node
	I1109 13:32:24.824119       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:24.824230       1 main.go:301] handling current node
	I1109 13:32:34.822261       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:32:34.822295       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60] <==
	W1109 13:30:30.083563       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:30:30.107111       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 13:30:31.779826       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.45.153"}
	W1109 13:30:52.919558       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:30:52.934724       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:30:52.965164       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 13:30:52.979208       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 13:31:05.125383       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.45.153:443: connect: connection refused
	E1109 13:31:05.125496       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.45.153:443: connect: connection refused" logger="UnhandledError"
	W1109 13:31:05.126102       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.45.153:443: connect: connection refused
	E1109 13:31:05.127538       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.45.153:443: connect: connection refused" logger="UnhandledError"
	W1109 13:31:05.199640       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.45.153:443: connect: connection refused
	E1109 13:31:05.199777       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.45.153:443: connect: connection refused" logger="UnhandledError"
	E1109 13:31:19.394009       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.115.160:443: connect: connection refused" logger="UnhandledError"
	W1109 13:31:19.394171       1 handler_proxy.go:99] no RequestInfo found in the context
	E1109 13:31:19.394227       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1109 13:31:19.394993       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.115.160:443: connect: connection refused" logger="UnhandledError"
	E1109 13:31:19.401411       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.100.115.160:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.100.115.160:443: connect: connection refused" logger="UnhandledError"
	I1109 13:31:19.534890       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1109 13:32:16.622149       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35474: use of closed network connection
	E1109 13:32:16.851192       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35494: use of closed network connection
	E1109 13:32:17.009708       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:35512: use of closed network connection
	
	
	==> kube-controller-manager [bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36] <==
	I1109 13:30:22.914090       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:30:22.921478       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:30:22.921507       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 13:30:22.921513       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 13:30:22.925575       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 13:30:22.937014       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:30:22.943401       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 13:30:22.949024       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 13:30:22.949133       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 13:30:22.949191       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 13:30:22.949383       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 13:30:22.949649       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 13:30:22.949705       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:30:22.949756       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 13:30:22.950190       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 13:30:22.951295       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	E1109 13:30:28.220729       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1109 13:30:52.912943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1109 13:30:52.913088       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1109 13:30:52.913137       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1109 13:30:52.949607       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1109 13:30:52.955740       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 13:30:53.018685       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:30:53.056549       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:31:07.885021       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52] <==
	I1109 13:30:24.651171       1 server_linux.go:53] "Using iptables proxy"
	I1109 13:30:24.749604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:30:24.850189       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:30:24.850234       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 13:30:24.850318       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:30:25.053694       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 13:30:25.053754       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:30:25.080023       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:30:25.090986       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:30:25.091023       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:30:25.092876       1 config.go:200] "Starting service config controller"
	I1109 13:30:25.092907       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:30:25.092927       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:30:25.092933       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:30:25.094140       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:30:25.094165       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:30:25.094903       1 config.go:309] "Starting node config controller"
	I1109 13:30:25.094913       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:30:25.094919       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:30:25.193606       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 13:30:25.193684       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:30:25.194992       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795] <==
	I1109 13:30:16.279860       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1109 13:30:16.298807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1109 13:30:16.299212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 13:30:16.299364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 13:30:16.299463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 13:30:16.299568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:30:16.299772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:30:16.299849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 13:30:16.300005       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 13:30:16.300064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 13:30:16.300115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 13:30:16.300174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 13:30:16.300226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 13:30:16.300567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 13:30:16.300644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 13:30:16.300661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 13:30:16.300774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:30:16.300834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 13:30:16.300837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 13:30:16.300883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 13:30:17.120508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 13:30:17.158761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 13:30:17.197309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 13:30:17.206819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1109 13:30:17.781928       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 13:32:35 addons-651467 kubelet[1291]: I1109 13:32:35.954390    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e171f132-b733-4b54-ae4c-19c7f35a643d-gcp-creds\") pod \"e171f132-b733-4b54-ae4c-19c7f35a643d\" (UID: \"e171f132-b733-4b54-ae4c-19c7f35a643d\") "
	Nov 09 13:32:35 addons-651467 kubelet[1291]: I1109 13:32:35.954862    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e171f132-b733-4b54-ae4c-19c7f35a643d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e171f132-b733-4b54-ae4c-19c7f35a643d" (UID: "e171f132-b733-4b54-ae4c-19c7f35a643d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 09 13:32:35 addons-651467 kubelet[1291]: I1109 13:32:35.954953    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e171f132-b733-4b54-ae4c-19c7f35a643d-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d" (OuterVolumeSpecName: "data") pod "e171f132-b733-4b54-ae4c-19c7f35a643d" (UID: "e171f132-b733-4b54-ae4c-19c7f35a643d"). InnerVolumeSpecName "pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 09 13:32:35 addons-651467 kubelet[1291]: I1109 13:32:35.960133    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e171f132-b733-4b54-ae4c-19c7f35a643d-kube-api-access-rp5bl" (OuterVolumeSpecName: "kube-api-access-rp5bl") pod "e171f132-b733-4b54-ae4c-19c7f35a643d" (UID: "e171f132-b733-4b54-ae4c-19c7f35a643d"). InnerVolumeSpecName "kube-api-access-rp5bl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 09 13:32:36 addons-651467 kubelet[1291]: I1109 13:32:36.054873    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e171f132-b733-4b54-ae4c-19c7f35a643d-gcp-creds\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:32:36 addons-651467 kubelet[1291]: I1109 13:32:36.055075    1291 reconciler_common.go:299] "Volume detached for volume \"pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d\" (UniqueName: \"kubernetes.io/host-path/e171f132-b733-4b54-ae4c-19c7f35a643d-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:32:36 addons-651467 kubelet[1291]: I1109 13:32:36.055168    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rp5bl\" (UniqueName: \"kubernetes.io/projected/e171f132-b733-4b54-ae4c-19c7f35a643d-kube-api-access-rp5bl\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:32:36 addons-651467 kubelet[1291]: I1109 13:32:36.771482    1291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cf471d392a65659791b80646d6f0e91ce55aa38d4dd680bad8bb87387df601f"
	Nov 09 13:32:37 addons-651467 kubelet[1291]: I1109 13:32:37.164289    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14a940c4-9553-4233-9b17-b4a97753eab9-gcp-creds\") pod \"helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d\" (UID: \"14a940c4-9553-4233-9b17-b4a97753eab9\") " pod="local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d"
	Nov 09 13:32:37 addons-651467 kubelet[1291]: I1109 13:32:37.164407    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/14a940c4-9553-4233-9b17-b4a97753eab9-data\") pod \"helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d\" (UID: \"14a940c4-9553-4233-9b17-b4a97753eab9\") " pod="local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d"
	Nov 09 13:32:37 addons-651467 kubelet[1291]: I1109 13:32:37.164450    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xczc\" (UniqueName: \"kubernetes.io/projected/14a940c4-9553-4233-9b17-b4a97753eab9-kube-api-access-5xczc\") pod \"helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d\" (UID: \"14a940c4-9553-4233-9b17-b4a97753eab9\") " pod="local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d"
	Nov 09 13:32:37 addons-651467 kubelet[1291]: I1109 13:32:37.164497    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/14a940c4-9553-4233-9b17-b4a97753eab9-script\") pod \"helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d\" (UID: \"14a940c4-9553-4233-9b17-b4a97753eab9\") " pod="local-path-storage/helper-pod-delete-pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d"
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.808910    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e171f132-b733-4b54-ae4c-19c7f35a643d" path="/var/lib/kubelet/pods/e171f132-b733-4b54-ae4c-19c7f35a643d/volumes"
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.987913    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a940c4-9553-4233-9b17-b4a97753eab9-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "14a940c4-9553-4233-9b17-b4a97753eab9" (UID: "14a940c4-9553-4233-9b17-b4a97753eab9"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.990184    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14a940c4-9553-4233-9b17-b4a97753eab9-gcp-creds\") pod \"14a940c4-9553-4233-9b17-b4a97753eab9\" (UID: \"14a940c4-9553-4233-9b17-b4a97753eab9\") "
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.990330    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/14a940c4-9553-4233-9b17-b4a97753eab9-data\") pod \"14a940c4-9553-4233-9b17-b4a97753eab9\" (UID: \"14a940c4-9553-4233-9b17-b4a97753eab9\") "
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.990442    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14a940c4-9553-4233-9b17-b4a97753eab9-data" (OuterVolumeSpecName: "data") pod "14a940c4-9553-4233-9b17-b4a97753eab9" (UID: "14a940c4-9553-4233-9b17-b4a97753eab9"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.990677    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xczc\" (UniqueName: \"kubernetes.io/projected/14a940c4-9553-4233-9b17-b4a97753eab9-kube-api-access-5xczc\") pod \"14a940c4-9553-4233-9b17-b4a97753eab9\" (UID: \"14a940c4-9553-4233-9b17-b4a97753eab9\") "
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.991659    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/14a940c4-9553-4233-9b17-b4a97753eab9-script\") pod \"14a940c4-9553-4233-9b17-b4a97753eab9\" (UID: \"14a940c4-9553-4233-9b17-b4a97753eab9\") "
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.992579    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/14a940c4-9553-4233-9b17-b4a97753eab9-gcp-creds\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.992751    1291 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/14a940c4-9553-4233-9b17-b4a97753eab9-data\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.993423    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a940c4-9553-4233-9b17-b4a97753eab9-script" (OuterVolumeSpecName: "script") pod "14a940c4-9553-4233-9b17-b4a97753eab9" (UID: "14a940c4-9553-4233-9b17-b4a97753eab9"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 09 13:32:38 addons-651467 kubelet[1291]: I1109 13:32:38.997028    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a940c4-9553-4233-9b17-b4a97753eab9-kube-api-access-5xczc" (OuterVolumeSpecName: "kube-api-access-5xczc") pod "14a940c4-9553-4233-9b17-b4a97753eab9" (UID: "14a940c4-9553-4233-9b17-b4a97753eab9"). InnerVolumeSpecName "kube-api-access-5xczc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 09 13:32:39 addons-651467 kubelet[1291]: I1109 13:32:39.094307    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5xczc\" (UniqueName: \"kubernetes.io/projected/14a940c4-9553-4233-9b17-b4a97753eab9-kube-api-access-5xczc\") on node \"addons-651467\" DevicePath \"\""
	Nov 09 13:32:39 addons-651467 kubelet[1291]: I1109 13:32:39.094359    1291 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/14a940c4-9553-4233-9b17-b4a97753eab9-script\") on node \"addons-651467\" DevicePath \"\""
	
	
	==> storage-provisioner [656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc] <==
	W1109 13:32:14.595569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:16.604956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:16.612921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:18.616147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:18.621176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:20.625184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:20.630094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:22.633329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:22.637990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:24.640654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:24.644974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:26.647974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:26.652347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:28.655326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:28.660104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:30.663221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:30.668547       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:32.671717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:32.678407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:34.681277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:34.685629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:36.692424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:36.708280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:38.711535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:32:38.717686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-651467 -n addons-651467
helpers_test.go:269: (dbg) Run:  kubectl --context addons-651467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-29qmn ingress-nginx-admission-patch-bp4lk registry-creds-764b6fb674-sppdf
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-651467 describe pod ingress-nginx-admission-create-29qmn ingress-nginx-admission-patch-bp4lk registry-creds-764b6fb674-sppdf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-651467 describe pod ingress-nginx-admission-create-29qmn ingress-nginx-admission-patch-bp4lk registry-creds-764b6fb674-sppdf: exit status 1 (81.251685ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-29qmn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bp4lk" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-sppdf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-651467 describe pod ingress-nginx-admission-create-29qmn ingress-nginx-admission-patch-bp4lk registry-creds-764b6fb674-sppdf: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable headlamp --alsologtostderr -v=1: exit status 11 (264.896514ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:40.707646   12095 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:40.707834   12095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:40.707845   12095 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:40.707851   12095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:40.708279   12095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:40.709905   12095 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:40.710428   12095 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:40.710447   12095 addons.go:607] checking whether the cluster is paused
	I1109 13:32:40.710559   12095 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:40.710574   12095 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:40.711129   12095 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:40.735390   12095 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:40.735454   12095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:40.754425   12095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:40.862479   12095 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:40.862592   12095 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:40.892834   12095 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:40.892853   12095 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:40.892858   12095 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:40.892862   12095 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:40.892866   12095 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:40.892870   12095 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:40.892873   12095 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:40.892876   12095 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:40.892879   12095 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:40.892885   12095 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:40.892893   12095 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:40.892897   12095 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:40.892900   12095 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:40.892903   12095 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:40.892906   12095 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:40.892911   12095 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:40.892914   12095 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:40.892918   12095 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:40.892921   12095 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:40.892924   12095 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:40.892929   12095 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:40.892932   12095 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:40.892935   12095 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:40.892937   12095 cri.go:89] found id: ""
	I1109 13:32:40.892989   12095 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:40.913100   12095 out.go:203] 
	W1109 13:32:40.916056   12095 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:40.916082   12095 out.go:285] * 
	* 
	W1109 13:32:40.920049   12095 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:40.923137   12095 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.65s)
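Note: the exit status 11 above comes from minikube's paused-state probe: before disabling an addon it lists kube-system containers with crictl and then runs "sudo runc list -f json", which fails on this node because /run/runc does not exist. A minimal diagnostic sketch for reproducing the probe by hand (profile name and commands are taken from the log above; this is not part of the test harness):

	out/minikube-linux-arm64 -p addons-651467 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"   # container listing that succeeds in the log
	out/minikube-linux-arm64 -p addons-651467 ssh -- "sudo ls -ld /run/runc"                                                       # expected to fail: directory absent on this crio node
	out/minikube-linux-arm64 -p addons-651467 ssh -- "sudo runc list -f json"                                                      # same error as the test: open /run/runc: no such file or directory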

                                                
                                    
TestAddons/parallel/CloudSpanner (5.42s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-gv67d" [537df624-fa5e-45c4-a02b-58afc69453e3] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004767467s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (404.451981ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:37.395693   11516 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:37.408028   11516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:37.408117   11516 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:37.408139   11516 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:37.408462   11516 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:37.408823   11516 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:37.409250   11516 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:37.409288   11516 addons.go:607] checking whether the cluster is paused
	I1109 13:32:37.409442   11516 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:37.409475   11516 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:37.409963   11516 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:37.430791   11516 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:37.430853   11516 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:37.452175   11516 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:37.564385   11516 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:37.564540   11516 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:37.667525   11516 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:37.667549   11516 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:37.667556   11516 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:37.667561   11516 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:37.667564   11516 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:37.667569   11516 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:37.667572   11516 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:37.667575   11516 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:37.667579   11516 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:37.667589   11516 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:37.667597   11516 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:37.667600   11516 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:37.667604   11516 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:37.667607   11516 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:37.667611   11516 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:37.667621   11516 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:37.667629   11516 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:37.667633   11516 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:37.667636   11516 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:37.667640   11516 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:37.667644   11516 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:37.667647   11516 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:37.667650   11516 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:37.667653   11516 cri.go:89] found id: ""
	I1109 13:32:37.667703   11516 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:37.704829   11516 out.go:203] 
	W1109 13:32:37.708424   11516 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:37.708458   11516 out.go:285] * 
	* 
	W1109 13:32:37.712567   11516 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:37.716914   11516 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.42s)
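Note: the emulator pod itself became Ready; only the follow-up addon disable hit the same runc probe failure. The readiness wait performed by addons_test.go:840 can be approximated from the command line with kubectl (a rough equivalent, not part of the harness):

	kubectl --context addons-651467 -n default wait --for=condition=Ready pod -l app=cloud-spanner-emulator --timeout=6m   # mirrors the test's 6m0s wait for pods matching app=cloud-spanner-emulator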

                                                
                                    
TestAddons/parallel/LocalPath (8.41s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-651467 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-651467 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-651467 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e171f132-b733-4b54-ae4c-19c7f35a643d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e171f132-b733-4b54-ae4c-19c7f35a643d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e171f132-b733-4b54-ae4c-19c7f35a643d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003581846s
addons_test.go:967: (dbg) Run:  kubectl --context addons-651467 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 ssh "cat /opt/local-path-provisioner/pvc-fe3d3884-77bf-4c1a-9e79-8108e5bf732d_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-651467 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-651467 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (269.131517ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:37.074038   11468 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:37.074297   11468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:37.074329   11468 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:37.074347   11468 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:37.074756   11468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:37.075207   11468 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:37.076020   11468 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:37.076063   11468 addons.go:607] checking whether the cluster is paused
	I1109 13:32:37.076577   11468 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:37.076608   11468 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:37.077254   11468 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:37.095272   11468 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:37.095333   11468 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:37.112896   11468 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:37.218478   11468 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:37.218609   11468 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:37.249072   11468 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:37.249095   11468 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:37.249100   11468 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:37.249104   11468 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:37.249119   11468 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:37.249140   11468 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:37.249149   11468 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:37.249153   11468 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:37.249156   11468 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:37.249163   11468 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:37.249171   11468 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:37.249175   11468 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:37.249178   11468 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:37.249182   11468 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:37.249186   11468 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:37.249191   11468 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:37.249201   11468 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:37.249218   11468 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:37.249223   11468 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:37.249226   11468 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:37.249233   11468 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:37.249239   11468 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:37.249243   11468 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:37.249246   11468 cri.go:89] found id: ""
	I1109 13:32:37.249312   11468 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:37.265392   11468 out.go:203] 
	W1109 13:32:37.269180   11468 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:37.269219   11468 out.go:285] * 
	* 
	W1109 13:32:37.273072   11468 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:37.276348   11468 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.41s)
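Note on the disable failures in this group: each `addons disable` run above exits at the same point. Before touching the addon, minikube checks whether the cluster is paused, first listing kube-system containers with crictl (which succeeds) and then shelling out to `sudo runc list -f json`, which fails on this crio node with "open /run/runc: no such file or directory", so the command aborts with MK_ADDON_DISABLE_PAUSED. Below is a minimal sketch of reproducing that check by hand over `minikube ssh`, using the binary path and profile from this report; the nodeRun helper and the fallback of treating a missing /run/runc as "nothing paused" are illustrative assumptions, not minikube's actual behaviour.

// paused_check_sketch.go: hypothetical, hand-rolled version of the paused
// check that the failing "addons disable" commands run. Assumes the
// out/minikube-linux-arm64 binary and the addons-651467 profile from this
// report; nodeRun is an illustrative helper, not a minikube function.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeRun executes a command inside the minikube node via "minikube ssh".
func nodeRun(profile string, args ...string) (string, error) {
	full := append([]string{"-p", profile, "ssh", "--"}, args...)
	out, err := exec.Command("out/minikube-linux-arm64", full...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "addons-651467"

	// Step 1: the crictl listing from the log above; it succeeds and returns
	// the kube-system container IDs.
	ids, err := nodeRun(profile, "sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	fmt.Printf("crictl: err=%v, %d container ids\n", err, len(strings.Fields(ids)))

	// Step 2: the runc call that fails with
	// "open /run/runc: no such file or directory" on this crio node.
	out, err := nodeRun(profile, "sudo", "runc", "list", "-f", "json")
	if err != nil && strings.Contains(out, "no such file or directory") {
		// Assumption for this sketch only: an absent /run/runc means no
		// paused containers, so the disable could proceed; minikube instead
		// aborts with MK_ADDON_DISABLE_PAUSED.
		fmt.Println("runc state dir missing; treating cluster as not paused")
		return
	}
	fmt.Println("runc list output:", out)
}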

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-rx8x7" [a557f519-0b90-4340-abc1-df9fb9511be1] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003260782s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (263.467935ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:28.661504   11058 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:28.661652   11058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:28.661664   11058 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:28.661669   11058 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:28.661954   11058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:28.662231   11058 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:28.662631   11058 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:28.662652   11058 addons.go:607] checking whether the cluster is paused
	I1109 13:32:28.662762   11058 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:28.662776   11058 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:28.663226   11058 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:28.685413   11058 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:28.685474   11058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:28.702443   11058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:28.814363   11058 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:28.814452   11058 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:28.843486   11058 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:28.843513   11058 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:28.843519   11058 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:28.843523   11058 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:28.843527   11058 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:28.843530   11058 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:28.843533   11058 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:28.843536   11058 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:28.843539   11058 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:28.843547   11058 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:28.843550   11058 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:28.843554   11058 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:28.843557   11058 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:28.843561   11058 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:28.843565   11058 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:28.843570   11058 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:28.843574   11058 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:28.843578   11058 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:28.843581   11058 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:28.843584   11058 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:28.843591   11058 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:28.843596   11058 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:28.843600   11058 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:28.843603   11058 cri.go:89] found id: ""
	I1109 13:32:28.843658   11058 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:28.858634   11058 out.go:203] 
	W1109 13:32:28.861646   11058 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:28.861668   11058 out.go:285] * 
	* 
	W1109 13:32:28.865519   11058 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:28.868386   11058 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.32s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-srcxl" [26fc42ce-b89c-451e-9e6b-3896bad4bc7e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004721928s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-651467 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-651467 addons disable yakd --alsologtostderr -v=1: exit status 11 (315.294683ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:32:23.354679   10952 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:32:23.355256   10952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:23.355318   10952 out.go:374] Setting ErrFile to fd 2...
	I1109 13:32:23.355340   10952 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:32:23.355753   10952 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:32:23.356217   10952 mustload.go:66] Loading cluster: addons-651467
	I1109 13:32:23.356784   10952 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:23.356828   10952 addons.go:607] checking whether the cluster is paused
	I1109 13:32:23.356970   10952 config.go:182] Loaded profile config "addons-651467": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:32:23.357001   10952 host.go:66] Checking if "addons-651467" exists ...
	I1109 13:32:23.357478   10952 cli_runner.go:164] Run: docker container inspect addons-651467 --format={{.State.Status}}
	I1109 13:32:23.394062   10952 ssh_runner.go:195] Run: systemctl --version
	I1109 13:32:23.394123   10952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-651467
	I1109 13:32:23.422996   10952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/addons-651467/id_rsa Username:docker}
	I1109 13:32:23.538495   10952 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:32:23.538629   10952 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:32:23.570320   10952 cri.go:89] found id: "d2bf491a803e11cc4313b958ddc2a4f9b81fe1d6c808d32e0eabbc433616d351"
	I1109 13:32:23.570343   10952 cri.go:89] found id: "ae9a6f508e15b8d1fe4384aa76e3877df10ee5113b66c3083e80c4dcde62c8a8"
	I1109 13:32:23.570348   10952 cri.go:89] found id: "93a600602192b90ae7356ac03bd841175f91280d377e697cfd4deb434569fe53"
	I1109 13:32:23.570352   10952 cri.go:89] found id: "f480ecab5b3922bb178d1d72c1958865b09068eac71bf6a035651cc6ef4eb5ee"
	I1109 13:32:23.570356   10952 cri.go:89] found id: "a21703b53016b9a78ff4fdf297bac966cefcb887327ab99bfc0a9cfe52433791"
	I1109 13:32:23.570360   10952 cri.go:89] found id: "00a017d960b122a0ac7b1c3c9a793dfdf1bbd8ec498c45c5f9f540e09d617f7a"
	I1109 13:32:23.570364   10952 cri.go:89] found id: "ddbfebb8b3bd893f8ae20d9efaf7473d0a137e5797afeb9803b45bb8aa1f051a"
	I1109 13:32:23.570367   10952 cri.go:89] found id: "cf0248d05e3120ef7b5c016040983dba6d824f850f3e52c07f70ea2cc2732e40"
	I1109 13:32:23.570370   10952 cri.go:89] found id: "14c1c07c042a874219900b89960a0a218cb17f2f4ffe82345cd8e167830bda14"
	I1109 13:32:23.570378   10952 cri.go:89] found id: "d6ac5bca1cd4a01c236b3a7bd5cb7284e2ab5e47d7caf8f848c3c414c09c1622"
	I1109 13:32:23.570381   10952 cri.go:89] found id: "8f8d82b9ad5448afb3d8e85c7c56d6842c12a3d740484f974a4d1c55ceb6c03b"
	I1109 13:32:23.570385   10952 cri.go:89] found id: "07e93bef4f027a026b1e9eeb9678c48f4fd2f36ed05a6b3dad4e03db5bb3d26e"
	I1109 13:32:23.570388   10952 cri.go:89] found id: "1cfcc34c91d7064c30149b4a2e9c9edf756dac367ab8b8881fa8a5c2d7d87cdf"
	I1109 13:32:23.570393   10952 cri.go:89] found id: "c8972766fd69484838521bc9f86a3eebd70d3ad33301b40d4d15af0e79c47667"
	I1109 13:32:23.570397   10952 cri.go:89] found id: "9b1d34d40bba48a95be9706cfae001d077b34f81b8e3bcb44a5ff6fce1d1371a"
	I1109 13:32:23.570402   10952 cri.go:89] found id: "656c0f0ceda1fdc379218b69ef8db74f9f41694122adc3e0fb824a1ed01361cc"
	I1109 13:32:23.570409   10952 cri.go:89] found id: "c23a1bc6ea5a86e9b5f5f1122d161f051386cd4e34dcba89dfe1c7f912f6252a"
	I1109 13:32:23.570415   10952 cri.go:89] found id: "d46f515271a1e15efe8a207332abfdc1e41fd81d78ed0bea3084656a7821a37b"
	I1109 13:32:23.570419   10952 cri.go:89] found id: "0ba9b918f523be282904c95852bc7c89ca78d01514b4fb147900c4f28b749e52"
	I1109 13:32:23.570423   10952 cri.go:89] found id: "bfadffb4d9828f17669ca17fe07a4c945b112c4347660ff98d5f85c95d6afb36"
	I1109 13:32:23.570428   10952 cri.go:89] found id: "ab555c10c248c5e6a67000926c5f9590b70efa1798712c4b098ef7f0373ac795"
	I1109 13:32:23.570432   10952 cri.go:89] found id: "7f6d6e73f49bafb368dee4aea3878226b8d58990c3b0be6703f0f0a6ff7f3f60"
	I1109 13:32:23.570435   10952 cri.go:89] found id: "e8a6b101abe65f7379ea7110c3039c164ac1c65c44e32d5e2b04308d6ccf31e3"
	I1109 13:32:23.570438   10952 cri.go:89] found id: ""
	I1109 13:32:23.570490   10952 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 13:32:23.588815   10952 out.go:203] 
	W1109 13:32:23.592021   10952 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:32:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 13:32:23.592050   10952 out.go:285] * 
	* 
	W1109 13:32:23.595813   10952 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 13:32:23.598682   10952 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-651467 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-002359 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-002359 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-r228d" [3ebf374c-ab24-4668-9255-ce164c6b9712] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-002359 -n functional-002359
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-09 13:49:32.382054387 +0000 UTC m=+1226.683290284
functional_test.go:1645: (dbg) Run:  kubectl --context functional-002359 describe po hello-node-connect-7d85dfc575-r228d -n default
functional_test.go:1645: (dbg) kubectl --context functional-002359 describe po hello-node-connect-7d85dfc575-r228d -n default:
Name:             hello-node-connect-7d85dfc575-r228d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-002359/192.168.49.2
Start Time:       Sun, 09 Nov 2025 13:39:31 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-58dk9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-58dk9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r228d to functional-002359
Normal   Pulling    7m2s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-002359 logs hello-node-connect-7d85dfc575-r228d -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-002359 logs hello-node-connect-7d85dfc575-r228d -n default: exit status 1 (105.487896ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-r228d" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-002359 logs hello-node-connect-7d85dfc575-r228d -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-002359 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-r228d
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-002359/192.168.49.2
Start Time:       Sun, 09 Nov 2025 13:39:31 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-58dk9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-58dk9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r228d to functional-002359
Normal   Pulling    7m2s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-002359 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-002359 logs -l app=hello-node-connect: exit status 1 (88.78107ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-r228d" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-002359 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-002359 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.227.210
IPs:                      10.106.227.210
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32164/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
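Note: the empty Endpoints above follow directly from the image-pull error in the pod events. The deployment was created with the unqualified reference `kicbase/echo-server`, and the node's short-name resolution is in enforcing mode, so the pull is rejected ("returns ambiguous list", apparently because the alias resolves to more than one registry) and no pod ever becomes ready. Below is a minimal sketch of the same `kubectl create deployment` call with a fully qualified reference, so no short-name resolution happens at all; the docker.io/ registry prefix is an assumption about where the image is meant to come from, not something stated in this report.

// qualified_image_sketch.go: hypothetical variant of the failing
// "kubectl create deployment" call from functional_test.go:1636, using a
// fully qualified image reference. The docker.io/ prefix is an assumption;
// use whichever registry actually hosts the image.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-002359",
		"create", "deployment", "hello-node-connect",
		// Registry + repository + tag: nothing left for the enforcing
		// short-name mode to resolve, so the ambiguous-alias error cannot occur.
		"--image", "docker.io/kicbase/echo-server:latest").CombinedOutput()
	fmt.Printf("%s(err=%v)\n", out, err)
}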
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-002359
helpers_test.go:243: (dbg) docker inspect functional-002359:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "22906744b160366d954eec3ca67a303b654c8ecdafb87659a1ef127a6359f0ea",
	        "Created": "2025-11-09T13:36:35.713528957Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 19613,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:36:35.773881154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/22906744b160366d954eec3ca67a303b654c8ecdafb87659a1ef127a6359f0ea/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/22906744b160366d954eec3ca67a303b654c8ecdafb87659a1ef127a6359f0ea/hostname",
	        "HostsPath": "/var/lib/docker/containers/22906744b160366d954eec3ca67a303b654c8ecdafb87659a1ef127a6359f0ea/hosts",
	        "LogPath": "/var/lib/docker/containers/22906744b160366d954eec3ca67a303b654c8ecdafb87659a1ef127a6359f0ea/22906744b160366d954eec3ca67a303b654c8ecdafb87659a1ef127a6359f0ea-json.log",
	        "Name": "/functional-002359",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-002359:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-002359",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "22906744b160366d954eec3ca67a303b654c8ecdafb87659a1ef127a6359f0ea",
	                "LowerDir": "/var/lib/docker/overlay2/e7ad402f4a4e7185c828e588db0cfe79085c08222af87722fdf564cdfc416ca0-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e7ad402f4a4e7185c828e588db0cfe79085c08222af87722fdf564cdfc416ca0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e7ad402f4a4e7185c828e588db0cfe79085c08222af87722fdf564cdfc416ca0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e7ad402f4a4e7185c828e588db0cfe79085c08222af87722fdf564cdfc416ca0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-002359",
	                "Source": "/var/lib/docker/volumes/functional-002359/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-002359",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-002359",
	                "name.minikube.sigs.k8s.io": "functional-002359",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8c4179125708989cd2bc0fed5bb8a9eb261a857188f30ed9769475108afe7627",
	            "SandboxKey": "/var/run/docker/netns/8c4179125708",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-002359": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:f7:ee:1b:d7:00",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ac3790680cfe4f7c8b71a00d75253403809003a7c3335b12d04bc1be2d601785",
	                    "EndpointID": "a02fcb77511e04f159c41e7019e697bea71b98e10b8e2fbd4ef889934f1ed12b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-002359",
	                        "22906744b160"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
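Note: the port mappings in this inspect output are what the earlier `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls in the addon logs read; minikube resolves the host port mapped to the node's 22/tcp before opening its SSH client. Below is a minimal stand-alone sketch of running that same template against this profile; the expected value 32778 simply mirrors the JSON above for functional-002359 and will differ on another run.

// ssh_port_sketch.go: hypothetical stand-alone version of the
// "docker container inspect -f ..." call that appears in the minikube logs
// above, reading the host port mapped to the node's 22/tcp from the same
// inspect data shown in this report.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "functional-002359").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// Expect "32778" for the container in this report; the value is used to
	// build the 127.0.0.1:<port> SSH endpoint seen in the sshutil log lines.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}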
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-002359 -n functional-002359
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-002359 logs -n 25: (1.437822205s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ ssh     │ functional-002359 ssh sudo crictl images                                                                 │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ ssh     │ functional-002359 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ ssh     │ functional-002359 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │                     │
	│ cache   │ functional-002359 cache reload                                                                           │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ ssh     │ functional-002359 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ kubectl │ functional-002359 kubectl -- --context functional-002359 get pods                                        │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:38 UTC │
	│ start   │ -p functional-002359 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:38 UTC │ 09 Nov 25 13:39 UTC │
	│ service │ invalid-svc -p functional-002359                                                                         │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │                     │
	│ config  │ functional-002359 config unset cpus                                                                      │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │ 09 Nov 25 13:39 UTC │
	│ ssh     │ functional-002359 ssh echo hello                                                                         │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │ 09 Nov 25 13:39 UTC │
	│ config  │ functional-002359 config get cpus                                                                        │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │                     │
	│ config  │ functional-002359 config set cpus 2                                                                      │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │ 09 Nov 25 13:39 UTC │
	│ config  │ functional-002359 config get cpus                                                                        │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │ 09 Nov 25 13:39 UTC │
	│ config  │ functional-002359 config unset cpus                                                                      │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │ 09 Nov 25 13:39 UTC │
	│ ssh     │ functional-002359 ssh cat /etc/hostname                                                                  │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │ 09 Nov 25 13:39 UTC │
	│ config  │ functional-002359 config get cpus                                                                        │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │                     │
	│ tunnel  │ functional-002359 tunnel --alsologtostderr                                                               │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │                     │
	│ tunnel  │ functional-002359 tunnel --alsologtostderr                                                               │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │                     │
	│ tunnel  │ functional-002359 tunnel --alsologtostderr                                                               │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │                     │
	│ addons  │ functional-002359 addons list                                                                            │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │ 09 Nov 25 13:39 UTC │
	│ addons  │ functional-002359 addons list -o json                                                                    │ functional-002359 │ jenkins │ v1.37.0 │ 09 Nov 25 13:39 UTC │ 09 Nov 25 13:39 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:38:31
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:38:31.749084   23920 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:38:31.749176   23920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:38:31.749187   23920 out.go:374] Setting ErrFile to fd 2...
	I1109 13:38:31.749191   23920 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:38:31.749545   23920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:38:31.750068   23920 out.go:368] Setting JSON to false
	I1109 13:38:31.751114   23920 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1262,"bootTime":1762694250,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:38:31.751172   23920 start.go:143] virtualization:  
	I1109 13:38:31.754771   23920 out.go:179] * [functional-002359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:38:31.758724   23920 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:38:31.758780   23920 notify.go:221] Checking for updates...
	I1109 13:38:31.764599   23920 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:38:31.767521   23920 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:38:31.770337   23920 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:38:31.773153   23920 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:38:31.776045   23920 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:38:31.779456   23920 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:38:31.779542   23920 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:38:31.804740   23920 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:38:31.804843   23920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:38:31.873512   23920 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-09 13:38:31.86298372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:38:31.873613   23920 docker.go:319] overlay module found
	I1109 13:38:31.876910   23920 out.go:179] * Using the docker driver based on existing profile
	I1109 13:38:31.879818   23920 start.go:309] selected driver: docker
	I1109 13:38:31.879830   23920 start.go:930] validating driver "docker" against &{Name:functional-002359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-002359 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:38:31.879960   23920 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:38:31.880085   23920 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:38:31.934858   23920 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-09 13:38:31.92569946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:38:31.935300   23920 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:38:31.935323   23920 cni.go:84] Creating CNI manager for ""
	I1109 13:38:31.935372   23920 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:38:31.935413   23920 start.go:353] cluster config:
	{Name:functional-002359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-002359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:38:31.940455   23920 out.go:179] * Starting "functional-002359" primary control-plane node in "functional-002359" cluster
	I1109 13:38:31.943268   23920 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:38:31.946202   23920 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:38:31.949075   23920 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:38:31.949113   23920 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 13:38:31.949122   23920 cache.go:65] Caching tarball of preloaded images
	I1109 13:38:31.949148   23920 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:38:31.949204   23920 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:38:31.949213   23920 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:38:31.949371   23920 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/config.json ...
	I1109 13:38:31.968655   23920 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:38:31.968666   23920 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:38:31.968677   23920 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:38:31.968721   23920 start.go:360] acquireMachinesLock for functional-002359: {Name:mk3fe7f302fe0aa25013c2c5e1ec5d14ec721606 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:38:31.968806   23920 start.go:364] duration metric: took 67.676µs to acquireMachinesLock for "functional-002359"
	I1109 13:38:31.968826   23920 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:38:31.968830   23920 fix.go:54] fixHost starting: 
	I1109 13:38:31.969148   23920 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
	I1109 13:38:31.985304   23920 fix.go:112] recreateIfNeeded on functional-002359: state=Running err=<nil>
	W1109 13:38:31.985331   23920 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:38:31.988676   23920 out.go:252] * Updating the running docker "functional-002359" container ...
	I1109 13:38:31.988701   23920 machine.go:94] provisionDockerMachine start ...
	I1109 13:38:31.988777   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:32.006665   23920 main.go:143] libmachine: Using SSH client type: native
	I1109 13:38:32.006969   23920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1109 13:38:32.006975   23920 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:38:32.159475   23920 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002359
	
	I1109 13:38:32.159489   23920 ubuntu.go:182] provisioning hostname "functional-002359"
	I1109 13:38:32.159555   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:32.177027   23920 main.go:143] libmachine: Using SSH client type: native
	I1109 13:38:32.177322   23920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1109 13:38:32.177331   23920 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-002359 && echo "functional-002359" | sudo tee /etc/hostname
	I1109 13:38:32.340840   23920 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-002359
	
	I1109 13:38:32.340902   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:32.362928   23920 main.go:143] libmachine: Using SSH client type: native
	I1109 13:38:32.363268   23920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1109 13:38:32.363284   23920 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-002359' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-002359/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-002359' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:38:32.520429   23920 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:38:32.520444   23920 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:38:32.520465   23920 ubuntu.go:190] setting up certificates
	I1109 13:38:32.520472   23920 provision.go:84] configureAuth start
	I1109 13:38:32.520530   23920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002359
	I1109 13:38:32.540668   23920 provision.go:143] copyHostCerts
	I1109 13:38:32.540724   23920 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:38:32.540739   23920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:38:32.540816   23920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:38:32.540915   23920 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:38:32.540918   23920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:38:32.540944   23920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:38:32.540993   23920 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:38:32.540996   23920 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:38:32.541040   23920 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:38:32.541093   23920 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.functional-002359 san=[127.0.0.1 192.168.49.2 functional-002359 localhost minikube]
	I1109 13:38:32.827528   23920 provision.go:177] copyRemoteCerts
	I1109 13:38:32.827582   23920 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:38:32.827619   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:32.848344   23920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
	I1109 13:38:32.955653   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:38:32.974320   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 13:38:32.993495   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:38:33.013010   23920 provision.go:87] duration metric: took 492.522786ms to configureAuth
	I1109 13:38:33.013030   23920 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:38:33.013285   23920 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:38:33.013419   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:33.034582   23920 main.go:143] libmachine: Using SSH client type: native
	I1109 13:38:33.034885   23920 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32778 <nil> <nil>}
	I1109 13:38:33.034896   23920 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:38:38.414488   23920 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:38:38.414520   23920 machine.go:97] duration metric: took 6.425793207s to provisionDockerMachine
	I1109 13:38:38.414530   23920 start.go:293] postStartSetup for "functional-002359" (driver="docker")
	I1109 13:38:38.414539   23920 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:38:38.414597   23920 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:38:38.414638   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:38.432778   23920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
	I1109 13:38:38.539903   23920 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:38:38.543271   23920 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:38:38.543294   23920 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:38:38.543303   23920 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:38:38.543357   23920 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:38:38.543435   23920 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:38:38.543514   23920 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/test/nested/copy/4116/hosts -> hosts in /etc/test/nested/copy/4116
	I1109 13:38:38.543557   23920 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4116
	I1109 13:38:38.550920   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:38:38.569519   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/test/nested/copy/4116/hosts --> /etc/test/nested/copy/4116/hosts (40 bytes)
	I1109 13:38:38.586645   23920 start.go:296] duration metric: took 172.100555ms for postStartSetup
	I1109 13:38:38.586712   23920 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:38:38.586766   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:38.603658   23920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
	I1109 13:38:38.705037   23920 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:38:38.709855   23920 fix.go:56] duration metric: took 6.741016765s for fixHost
	I1109 13:38:38.709871   23920 start.go:83] releasing machines lock for "functional-002359", held for 6.741057315s
	I1109 13:38:38.709949   23920 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-002359
	I1109 13:38:38.727591   23920 ssh_runner.go:195] Run: cat /version.json
	I1109 13:38:38.727634   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:38.727928   23920 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:38:38.727981   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:38.748175   23920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
	I1109 13:38:38.754180   23920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
	I1109 13:38:38.851513   23920 ssh_runner.go:195] Run: systemctl --version
	I1109 13:38:38.958090   23920 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:38:38.994555   23920 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:38:38.998885   23920 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:38:38.998956   23920 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:38:39.006701   23920 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:38:39.006713   23920 start.go:496] detecting cgroup driver to use...
	I1109 13:38:39.006744   23920 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:38:39.006788   23920 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:38:39.023612   23920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:38:39.037318   23920 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:38:39.037389   23920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:38:39.053026   23920 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:38:39.065788   23920 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:38:39.196501   23920 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:38:39.337684   23920 docker.go:234] disabling docker service ...
	I1109 13:38:39.337760   23920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:38:39.352162   23920 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:38:39.364522   23920 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:38:39.499541   23920 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:38:39.628141   23920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:38:39.640749   23920 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:38:39.655576   23920 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:38:39.655640   23920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:38:39.665299   23920 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:38:39.665357   23920 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:38:39.674688   23920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:38:39.683541   23920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:38:39.692700   23920 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:38:39.701171   23920 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:38:39.710891   23920 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:38:39.719811   23920 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:38:39.729188   23920 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:38:39.737119   23920 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:38:39.744955   23920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:38:39.874282   23920 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:38:47.729162   23920 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.854857795s)
	I1109 13:38:47.729178   23920 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:38:47.729226   23920 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:38:47.732809   23920 start.go:564] Will wait 60s for crictl version
	I1109 13:38:47.732861   23920 ssh_runner.go:195] Run: which crictl
	I1109 13:38:47.736278   23920 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:38:47.763672   23920 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:38:47.763740   23920 ssh_runner.go:195] Run: crio --version
	I1109 13:38:47.789887   23920 ssh_runner.go:195] Run: crio --version
	I1109 13:38:47.821471   23920 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:38:47.824481   23920 cli_runner.go:164] Run: docker network inspect functional-002359 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:38:47.840398   23920 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:38:47.847409   23920 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1109 13:38:47.850168   23920 kubeadm.go:884] updating cluster {Name:functional-002359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-002359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:38:47.850286   23920 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:38:47.850350   23920 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:38:47.886657   23920 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:38:47.886667   23920 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:38:47.886720   23920 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:38:47.913284   23920 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:38:47.913296   23920 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:38:47.913302   23920 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1109 13:38:47.913391   23920 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-002359 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-002359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:38:47.913476   23920 ssh_runner.go:195] Run: crio config
	I1109 13:38:47.986580   23920 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1109 13:38:47.986599   23920 cni.go:84] Creating CNI manager for ""
	I1109 13:38:47.986608   23920 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:38:47.986624   23920 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:38:47.986655   23920 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-002359 NodeName:functional-002359 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:38:47.986780   23920 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-002359"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:38:47.986853   23920 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:38:47.994399   23920 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:38:47.994465   23920 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 13:38:48.001950   23920 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 13:38:48.017275   23920 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:38:48.031909   23920 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1109 13:38:48.046508   23920 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 13:38:48.050411   23920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:38:48.186154   23920 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:38:48.200732   23920 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359 for IP: 192.168.49.2
	I1109 13:38:48.200742   23920 certs.go:195] generating shared ca certs ...
	I1109 13:38:48.200757   23920 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:38:48.200908   23920 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:38:48.200944   23920 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:38:48.200950   23920 certs.go:257] generating profile certs ...
	I1109 13:38:48.201035   23920 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.key
	I1109 13:38:48.201086   23920 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/apiserver.key.263cac5a
	I1109 13:38:48.201122   23920 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/proxy-client.key
	I1109 13:38:48.201233   23920 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:38:48.201266   23920 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:38:48.201273   23920 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:38:48.201294   23920 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:38:48.201313   23920 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:38:48.201334   23920 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:38:48.201382   23920 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:38:48.201983   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:38:48.221937   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:38:48.239478   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:38:48.256571   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:38:48.273184   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 13:38:48.290858   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 13:38:48.308602   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:38:48.325209   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:38:48.342228   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:38:48.359208   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:38:48.377024   23920 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:38:48.394359   23920 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:38:48.406903   23920 ssh_runner.go:195] Run: openssl version
	I1109 13:38:48.412999   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:38:48.421435   23920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:38:48.425137   23920 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:38:48.425189   23920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:38:48.465887   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:38:48.473975   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:38:48.482569   23920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:38:48.486684   23920 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:38:48.486739   23920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:38:48.528884   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:38:48.536801   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:38:48.545218   23920 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:38:48.549089   23920 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:38:48.549156   23920 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:38:48.590814   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:38:48.598826   23920 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:38:48.602575   23920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:38:48.644364   23920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:38:48.685407   23920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:38:48.726922   23920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:38:48.771125   23920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:38:48.812816   23920 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 13:38:48.859884   23920 kubeadm.go:401] StartCluster: {Name:functional-002359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-002359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:38:48.859981   23920 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:38:48.860043   23920 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:38:48.889636   23920 cri.go:89] found id: "cc0225f0d3fc862a263aba94b43a8360ad345e2ba74021176372dd8fb19e2f46"
	I1109 13:38:48.889648   23920 cri.go:89] found id: "4399d84388041557c1d3c9a8c4e17027e1307f8182ddc9cc872cdcc755543e0b"
	I1109 13:38:48.889651   23920 cri.go:89] found id: "4663919f4dfd1c751b900adc0403bf4c4532fc6b25a43c53c46e3a122694511a"
	I1109 13:38:48.889654   23920 cri.go:89] found id: "fb70fb34ba82b21ce77b755ef23791d15855d8cc4358cccffdfd2bb2f1188601"
	I1109 13:38:48.889657   23920 cri.go:89] found id: "fd2d9ed063202e623ca374a65f74a1937f6945175a3d5077250d649751ff07aa"
	I1109 13:38:48.889659   23920 cri.go:89] found id: "43c7c71ce936bb80862e865a4e75ba3dda7fd0cca616d0c2c10e8c87a63026a7"
	I1109 13:38:48.889662   23920 cri.go:89] found id: "8d6e6004d07de0b3c112dd7099e42b98a279d0db1103180650ecf7707de4fc98"
	I1109 13:38:48.889665   23920 cri.go:89] found id: "93a2fe7c60151ff52423241accd77838add9eae68a87a84e559f224a0cdf0925"
	I1109 13:38:48.889667   23920 cri.go:89] found id: "7532ba0e54aae3f71098bf59e7c370cca3d3b96bf2a4a07f8c418a3330928626"
	I1109 13:38:48.889673   23920 cri.go:89] found id: "8984c46d88bac1f11c232f4d5f0a06bfc5996779b8f908ce7c3ef10f06e69f0e"
	I1109 13:38:48.889675   23920 cri.go:89] found id: "bd28ab5e7a4a0dde79be33ddaa3550ad7bc6ff75d57c96ecf26fe2c95f4cbbc8"
	I1109 13:38:48.889680   23920 cri.go:89] found id: "13a33829d2e290973b975890ccf5127dff062d5b8410155d2752117cae2ad566"
	I1109 13:38:48.889683   23920 cri.go:89] found id: "f4ac75767e71ea84d5aaf3595156493bd635565a954f84477d4b389350d65578"
	I1109 13:38:48.889685   23920 cri.go:89] found id: "99e5fe542114965b8cca149fb7cca9193c7a9f2708af4fc18d1a66cf778f739f"
	I1109 13:38:48.889688   23920 cri.go:89] found id: "d14f26decd336c2b535399242f119ebf40d36a1504d686f23c447e32ab6e512d"
	I1109 13:38:48.889692   23920 cri.go:89] found id: "e4c4701093658a41c25f9ec9e0173ef32a58de5a8f177bf502a7fcf31f155b34"
	I1109 13:38:48.889694   23920 cri.go:89] found id: ""
	I1109 13:38:48.889763   23920 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 13:38:48.900874   23920 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:38:48Z" level=error msg="open /run/runc: no such file or directory"
	I1109 13:38:48.900953   23920 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:38:48.908945   23920 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 13:38:48.908955   23920 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 13:38:48.909006   23920 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 13:38:48.917011   23920 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:38:48.917535   23920 kubeconfig.go:125] found "functional-002359" server: "https://192.168.49.2:8441"
	I1109 13:38:48.918814   23920 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 13:38:48.927070   23920 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-09 13:36:41.539866467 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-09 13:38:48.043533253 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1109 13:38:48.927080   23920 kubeadm.go:1161] stopping kube-system containers ...
	I1109 13:38:48.927090   23920 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1109 13:38:48.927162   23920 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:38:48.960223   23920 cri.go:89] found id: "cc0225f0d3fc862a263aba94b43a8360ad345e2ba74021176372dd8fb19e2f46"
	I1109 13:38:48.960235   23920 cri.go:89] found id: "4399d84388041557c1d3c9a8c4e17027e1307f8182ddc9cc872cdcc755543e0b"
	I1109 13:38:48.960238   23920 cri.go:89] found id: "4663919f4dfd1c751b900adc0403bf4c4532fc6b25a43c53c46e3a122694511a"
	I1109 13:38:48.960241   23920 cri.go:89] found id: "fb70fb34ba82b21ce77b755ef23791d15855d8cc4358cccffdfd2bb2f1188601"
	I1109 13:38:48.960243   23920 cri.go:89] found id: "fd2d9ed063202e623ca374a65f74a1937f6945175a3d5077250d649751ff07aa"
	I1109 13:38:48.960245   23920 cri.go:89] found id: "43c7c71ce936bb80862e865a4e75ba3dda7fd0cca616d0c2c10e8c87a63026a7"
	I1109 13:38:48.960247   23920 cri.go:89] found id: "8d6e6004d07de0b3c112dd7099e42b98a279d0db1103180650ecf7707de4fc98"
	I1109 13:38:48.960250   23920 cri.go:89] found id: "93a2fe7c60151ff52423241accd77838add9eae68a87a84e559f224a0cdf0925"
	I1109 13:38:48.960252   23920 cri.go:89] found id: "7532ba0e54aae3f71098bf59e7c370cca3d3b96bf2a4a07f8c418a3330928626"
	I1109 13:38:48.960257   23920 cri.go:89] found id: "8984c46d88bac1f11c232f4d5f0a06bfc5996779b8f908ce7c3ef10f06e69f0e"
	I1109 13:38:48.960259   23920 cri.go:89] found id: "bd28ab5e7a4a0dde79be33ddaa3550ad7bc6ff75d57c96ecf26fe2c95f4cbbc8"
	I1109 13:38:48.960263   23920 cri.go:89] found id: "13a33829d2e290973b975890ccf5127dff062d5b8410155d2752117cae2ad566"
	I1109 13:38:48.960265   23920 cri.go:89] found id: "f4ac75767e71ea84d5aaf3595156493bd635565a954f84477d4b389350d65578"
	I1109 13:38:48.960268   23920 cri.go:89] found id: "99e5fe542114965b8cca149fb7cca9193c7a9f2708af4fc18d1a66cf778f739f"
	I1109 13:38:48.960270   23920 cri.go:89] found id: "d14f26decd336c2b535399242f119ebf40d36a1504d686f23c447e32ab6e512d"
	I1109 13:38:48.960273   23920 cri.go:89] found id: "e4c4701093658a41c25f9ec9e0173ef32a58de5a8f177bf502a7fcf31f155b34"
	I1109 13:38:48.960276   23920 cri.go:89] found id: ""
	I1109 13:38:48.960280   23920 cri.go:252] Stopping containers: [cc0225f0d3fc862a263aba94b43a8360ad345e2ba74021176372dd8fb19e2f46 4399d84388041557c1d3c9a8c4e17027e1307f8182ddc9cc872cdcc755543e0b 4663919f4dfd1c751b900adc0403bf4c4532fc6b25a43c53c46e3a122694511a fb70fb34ba82b21ce77b755ef23791d15855d8cc4358cccffdfd2bb2f1188601 fd2d9ed063202e623ca374a65f74a1937f6945175a3d5077250d649751ff07aa 43c7c71ce936bb80862e865a4e75ba3dda7fd0cca616d0c2c10e8c87a63026a7 8d6e6004d07de0b3c112dd7099e42b98a279d0db1103180650ecf7707de4fc98 93a2fe7c60151ff52423241accd77838add9eae68a87a84e559f224a0cdf0925 7532ba0e54aae3f71098bf59e7c370cca3d3b96bf2a4a07f8c418a3330928626 8984c46d88bac1f11c232f4d5f0a06bfc5996779b8f908ce7c3ef10f06e69f0e bd28ab5e7a4a0dde79be33ddaa3550ad7bc6ff75d57c96ecf26fe2c95f4cbbc8 13a33829d2e290973b975890ccf5127dff062d5b8410155d2752117cae2ad566 f4ac75767e71ea84d5aaf3595156493bd635565a954f84477d4b389350d65578 99e5fe542114965b8cca149fb7cca9193c7a9f2708af4fc18d1a66cf778f739f d14f26decd336c2b535399242f119ebf40d36a1504d686f23c447e32ab6e512d e4c4701093658a41c25f9ec9e0173ef32a58de5a8f177bf502a7fcf31f155b34]
	I1109 13:38:48.960340   23920 ssh_runner.go:195] Run: which crictl
	I1109 13:38:48.964335   23920 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 cc0225f0d3fc862a263aba94b43a8360ad345e2ba74021176372dd8fb19e2f46 4399d84388041557c1d3c9a8c4e17027e1307f8182ddc9cc872cdcc755543e0b 4663919f4dfd1c751b900adc0403bf4c4532fc6b25a43c53c46e3a122694511a fb70fb34ba82b21ce77b755ef23791d15855d8cc4358cccffdfd2bb2f1188601 fd2d9ed063202e623ca374a65f74a1937f6945175a3d5077250d649751ff07aa 43c7c71ce936bb80862e865a4e75ba3dda7fd0cca616d0c2c10e8c87a63026a7 8d6e6004d07de0b3c112dd7099e42b98a279d0db1103180650ecf7707de4fc98 93a2fe7c60151ff52423241accd77838add9eae68a87a84e559f224a0cdf0925 7532ba0e54aae3f71098bf59e7c370cca3d3b96bf2a4a07f8c418a3330928626 8984c46d88bac1f11c232f4d5f0a06bfc5996779b8f908ce7c3ef10f06e69f0e bd28ab5e7a4a0dde79be33ddaa3550ad7bc6ff75d57c96ecf26fe2c95f4cbbc8 13a33829d2e290973b975890ccf5127dff062d5b8410155d2752117cae2ad566 f4ac75767e71ea84d5aaf3595156493bd635565a954f84477d4b389350d65578 99e5fe542114965b8cca149fb7cca9193c7a9f2708af4fc18d1a66cf778f739f d14f26decd336c2b535399242f119ebf40d36a1504d686f23c447e32ab6e512d e4c4701093658a41c25f9ec9e0173ef32a58de5a8f177bf502a7fcf31f155b34
	I1109 13:38:49.072771   23920 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 13:38:49.183133   23920 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 13:38:49.191369   23920 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Nov  9 13:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov  9 13:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov  9 13:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov  9 13:36 /etc/kubernetes/scheduler.conf
	
	I1109 13:38:49.191437   23920 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1109 13:38:49.199711   23920 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1109 13:38:49.208625   23920 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:38:49.208685   23920 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 13:38:49.216744   23920 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1109 13:38:49.224733   23920 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:38:49.224789   23920 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 13:38:49.232402   23920 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1109 13:38:49.239974   23920 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:38:49.240032   23920 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 13:38:49.247644   23920 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 13:38:49.255702   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:38:49.306041   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:38:52.030596   23920 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.724530567s)
	I1109 13:38:52.030657   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:38:52.256487   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:38:52.319173   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:38:52.380659   23920 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:38:52.380722   23920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:38:52.880902   23920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:38:53.381515   23920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:38:53.401351   23920 api_server.go:72] duration metric: took 1.020698s to wait for apiserver process to appear ...
	I1109 13:38:53.401365   23920 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:38:53.401382   23920 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 13:38:57.192088   23920 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 13:38:57.192104   23920 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 13:38:57.192116   23920 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 13:38:57.279923   23920 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 13:38:57.279956   23920 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 13:38:57.402149   23920 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 13:38:57.413540   23920 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 13:38:57.413557   23920 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 13:38:57.902202   23920 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 13:38:57.910643   23920 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 13:38:57.910661   23920 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 13:38:58.402298   23920 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 13:38:58.415902   23920 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1109 13:38:58.434439   23920 api_server.go:141] control plane version: v1.34.1
	I1109 13:38:58.434458   23920 api_server.go:131] duration metric: took 5.033087434s to wait for apiserver health ...
	I1109 13:38:58.434466   23920 cni.go:84] Creating CNI manager for ""
	I1109 13:38:58.434471   23920 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:38:58.438319   23920 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 13:38:58.441441   23920 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 13:38:58.445784   23920 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 13:38:58.445795   23920 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 13:38:58.460206   23920 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 13:38:59.033608   23920 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:38:59.037093   23920 system_pods.go:59] 8 kube-system pods found
	I1109 13:38:59.037118   23920 system_pods.go:61] "coredns-66bc5c9577-xpr2w" [dbba4567-530d-4a52-a2f2-8064707aa15b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 13:38:59.037126   23920 system_pods.go:61] "etcd-functional-002359" [2d763ca8-ba34-47e3-8d79-758a9a6edd10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 13:38:59.037131   23920 system_pods.go:61] "kindnet-bnks4" [cf64d488-cbe9-4bcc-8c0c-4ea871aaef1b] Running
	I1109 13:38:59.037137   23920 system_pods.go:61] "kube-apiserver-functional-002359" [93e27957-df27-4519-b2b0-ec58476e28bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 13:38:59.037142   23920 system_pods.go:61] "kube-controller-manager-functional-002359" [4726cf25-c727-43fb-b6d8-9107588da88a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 13:38:59.037147   23920 system_pods.go:61] "kube-proxy-8fpx6" [56ebc8fd-f360-4a52-bd69-a1ba3d689978] Running
	I1109 13:38:59.037153   23920 system_pods.go:61] "kube-scheduler-functional-002359" [08a912b3-4327-45ca-a6f4-d96fc3541a23] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 13:38:59.037156   23920 system_pods.go:61] "storage-provisioner" [ebf54c13-6f7b-4167-920f-f85c358ec5ab] Running
	I1109 13:38:59.037162   23920 system_pods.go:74] duration metric: took 3.542976ms to wait for pod list to return data ...
	I1109 13:38:59.037168   23920 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:38:59.039963   23920 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 13:38:59.039981   23920 node_conditions.go:123] node cpu capacity is 2
	I1109 13:38:59.039992   23920 node_conditions.go:105] duration metric: took 2.820693ms to run NodePressure ...
	I1109 13:38:59.040052   23920 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 13:38:59.289401   23920 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1109 13:38:59.293310   23920 kubeadm.go:744] kubelet initialised
	I1109 13:38:59.293320   23920 kubeadm.go:745] duration metric: took 3.907334ms waiting for restarted kubelet to initialise ...
	I1109 13:38:59.293336   23920 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 13:38:59.302850   23920 ops.go:34] apiserver oom_adj: -16
	I1109 13:38:59.302861   23920 kubeadm.go:602] duration metric: took 10.393901087s to restartPrimaryControlPlane
	I1109 13:38:59.302870   23920 kubeadm.go:403] duration metric: took 10.443010582s to StartCluster
	I1109 13:38:59.302883   23920 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:38:59.302943   23920 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:38:59.303575   23920 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:38:59.303785   23920 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:38:59.304103   23920 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:38:59.304135   23920 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 13:38:59.304206   23920 addons.go:70] Setting storage-provisioner=true in profile "functional-002359"
	I1109 13:38:59.304218   23920 addons.go:239] Setting addon storage-provisioner=true in "functional-002359"
	W1109 13:38:59.304223   23920 addons.go:248] addon storage-provisioner should already be in state true
	I1109 13:38:59.304242   23920 host.go:66] Checking if "functional-002359" exists ...
	I1109 13:38:59.304265   23920 addons.go:70] Setting default-storageclass=true in profile "functional-002359"
	I1109 13:38:59.304275   23920 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-002359"
	I1109 13:38:59.304558   23920 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
	I1109 13:38:59.305130   23920 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
	I1109 13:38:59.307030   23920 out.go:179] * Verifying Kubernetes components...
	I1109 13:38:59.310182   23920 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:38:59.345795   23920 addons.go:239] Setting addon default-storageclass=true in "functional-002359"
	W1109 13:38:59.345805   23920 addons.go:248] addon default-storageclass should already be in state true
	I1109 13:38:59.345828   23920 host.go:66] Checking if "functional-002359" exists ...
	I1109 13:38:59.346237   23920 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
	I1109 13:38:59.348425   23920 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 13:38:59.351333   23920 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:38:59.351345   23920 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 13:38:59.351409   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:59.383322   23920 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 13:38:59.383334   23920 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 13:38:59.383392   23920 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:38:59.385592   23920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
	I1109 13:38:59.415422   23920 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
	I1109 13:38:59.547654   23920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 13:38:59.588493   23920 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:38:59.597578   23920 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 13:39:00.551549   23920 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.003867398s)
	I1109 13:39:00.551640   23920 node_ready.go:35] waiting up to 6m0s for node "functional-002359" to be "Ready" ...
	I1109 13:39:00.555919   23920 node_ready.go:49] node "functional-002359" is "Ready"
	I1109 13:39:00.555937   23920 node_ready.go:38] duration metric: took 4.285846ms for node "functional-002359" to be "Ready" ...
	I1109 13:39:00.555948   23920 api_server.go:52] waiting for apiserver process to appear ...
	I1109 13:39:00.556061   23920 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:39:00.574088   23920 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 13:39:00.577190   23920 addons.go:515] duration metric: took 1.273025324s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1109 13:39:00.585739   23920 api_server.go:72] duration metric: took 1.281929333s to wait for apiserver process to appear ...
	I1109 13:39:00.585755   23920 api_server.go:88] waiting for apiserver healthz status ...
	I1109 13:39:00.585777   23920 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1109 13:39:00.648194   23920 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1109 13:39:00.649266   23920 api_server.go:141] control plane version: v1.34.1
	I1109 13:39:00.649282   23920 api_server.go:131] duration metric: took 63.521003ms to wait for apiserver health ...
	I1109 13:39:00.649289   23920 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 13:39:00.661046   23920 system_pods.go:59] 8 kube-system pods found
	I1109 13:39:00.661074   23920 system_pods.go:61] "coredns-66bc5c9577-xpr2w" [dbba4567-530d-4a52-a2f2-8064707aa15b] Running
	I1109 13:39:00.661082   23920 system_pods.go:61] "etcd-functional-002359" [2d763ca8-ba34-47e3-8d79-758a9a6edd10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 13:39:00.661086   23920 system_pods.go:61] "kindnet-bnks4" [cf64d488-cbe9-4bcc-8c0c-4ea871aaef1b] Running
	I1109 13:39:00.661093   23920 system_pods.go:61] "kube-apiserver-functional-002359" [93e27957-df27-4519-b2b0-ec58476e28bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 13:39:00.661099   23920 system_pods.go:61] "kube-controller-manager-functional-002359" [4726cf25-c727-43fb-b6d8-9107588da88a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 13:39:00.661102   23920 system_pods.go:61] "kube-proxy-8fpx6" [56ebc8fd-f360-4a52-bd69-a1ba3d689978] Running
	I1109 13:39:00.661108   23920 system_pods.go:61] "kube-scheduler-functional-002359" [08a912b3-4327-45ca-a6f4-d96fc3541a23] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 13:39:00.661111   23920 system_pods.go:61] "storage-provisioner" [ebf54c13-6f7b-4167-920f-f85c358ec5ab] Running
	I1109 13:39:00.661119   23920 system_pods.go:74] duration metric: took 11.824461ms to wait for pod list to return data ...
	I1109 13:39:00.661125   23920 default_sa.go:34] waiting for default service account to be created ...
	I1109 13:39:00.669144   23920 default_sa.go:45] found service account: "default"
	I1109 13:39:00.669159   23920 default_sa.go:55] duration metric: took 8.028774ms for default service account to be created ...
	I1109 13:39:00.669167   23920 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 13:39:00.683973   23920 system_pods.go:86] 8 kube-system pods found
	I1109 13:39:00.683988   23920 system_pods.go:89] "coredns-66bc5c9577-xpr2w" [dbba4567-530d-4a52-a2f2-8064707aa15b] Running
	I1109 13:39:00.683996   23920 system_pods.go:89] "etcd-functional-002359" [2d763ca8-ba34-47e3-8d79-758a9a6edd10] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 13:39:00.684002   23920 system_pods.go:89] "kindnet-bnks4" [cf64d488-cbe9-4bcc-8c0c-4ea871aaef1b] Running
	I1109 13:39:00.684008   23920 system_pods.go:89] "kube-apiserver-functional-002359" [93e27957-df27-4519-b2b0-ec58476e28bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 13:39:00.684015   23920 system_pods.go:89] "kube-controller-manager-functional-002359" [4726cf25-c727-43fb-b6d8-9107588da88a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 13:39:00.684018   23920 system_pods.go:89] "kube-proxy-8fpx6" [56ebc8fd-f360-4a52-bd69-a1ba3d689978] Running
	I1109 13:39:00.684023   23920 system_pods.go:89] "kube-scheduler-functional-002359" [08a912b3-4327-45ca-a6f4-d96fc3541a23] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 13:39:00.684038   23920 system_pods.go:89] "storage-provisioner" [ebf54c13-6f7b-4167-920f-f85c358ec5ab] Running
	I1109 13:39:00.684044   23920 system_pods.go:126] duration metric: took 14.872108ms to wait for k8s-apps to be running ...
	I1109 13:39:00.684050   23920 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 13:39:00.684118   23920 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:39:00.698943   23920 system_svc.go:56] duration metric: took 14.88266ms WaitForService to wait for kubelet
	I1109 13:39:00.698960   23920 kubeadm.go:587] duration metric: took 1.395156964s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:39:00.698979   23920 node_conditions.go:102] verifying NodePressure condition ...
	I1109 13:39:00.704037   23920 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 13:39:00.704052   23920 node_conditions.go:123] node cpu capacity is 2
	I1109 13:39:00.704062   23920 node_conditions.go:105] duration metric: took 5.077947ms to run NodePressure ...
	I1109 13:39:00.704073   23920 start.go:242] waiting for startup goroutines ...
	I1109 13:39:00.704079   23920 start.go:247] waiting for cluster config update ...
	I1109 13:39:00.704089   23920 start.go:256] writing updated cluster config ...
	I1109 13:39:00.704358   23920 ssh_runner.go:195] Run: rm -f paused
	I1109 13:39:00.712623   23920 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:39:00.720487   23920 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xpr2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:00.730485   23920 pod_ready.go:94] pod "coredns-66bc5c9577-xpr2w" is "Ready"
	I1109 13:39:00.730501   23920 pod_ready.go:86] duration metric: took 9.999988ms for pod "coredns-66bc5c9577-xpr2w" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:00.734128   23920 pod_ready.go:83] waiting for pod "etcd-functional-002359" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 13:39:02.740625   23920 pod_ready.go:104] pod "etcd-functional-002359" is not "Ready", error: <nil>
	W1109 13:39:05.240157   23920 pod_ready.go:104] pod "etcd-functional-002359" is not "Ready", error: <nil>
	W1109 13:39:07.739525   23920 pod_ready.go:104] pod "etcd-functional-002359" is not "Ready", error: <nil>
	W1109 13:39:09.740121   23920 pod_ready.go:104] pod "etcd-functional-002359" is not "Ready", error: <nil>
	I1109 13:39:10.240054   23920 pod_ready.go:94] pod "etcd-functional-002359" is "Ready"
	I1109 13:39:10.240068   23920 pod_ready.go:86] duration metric: took 9.505928075s for pod "etcd-functional-002359" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:10.242355   23920 pod_ready.go:83] waiting for pod "kube-apiserver-functional-002359" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:10.246926   23920 pod_ready.go:94] pod "kube-apiserver-functional-002359" is "Ready"
	I1109 13:39:10.246939   23920 pod_ready.go:86] duration metric: took 4.572003ms for pod "kube-apiserver-functional-002359" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:10.249261   23920 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-002359" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:12.254934   23920 pod_ready.go:94] pod "kube-controller-manager-functional-002359" is "Ready"
	I1109 13:39:12.254948   23920 pod_ready.go:86] duration metric: took 2.005674721s for pod "kube-controller-manager-functional-002359" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:12.257380   23920 pod_ready.go:83] waiting for pod "kube-proxy-8fpx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:12.261897   23920 pod_ready.go:94] pod "kube-proxy-8fpx6" is "Ready"
	I1109 13:39:12.261910   23920 pod_ready.go:86] duration metric: took 4.517062ms for pod "kube-proxy-8fpx6" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:12.264135   23920 pod_ready.go:83] waiting for pod "kube-scheduler-functional-002359" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:12.637608   23920 pod_ready.go:94] pod "kube-scheduler-functional-002359" is "Ready"
	I1109 13:39:12.637621   23920 pod_ready.go:86] duration metric: took 373.47554ms for pod "kube-scheduler-functional-002359" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 13:39:12.637630   23920 pod_ready.go:40] duration metric: took 11.924984791s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 13:39:12.685567   23920 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 13:39:12.688773   23920 out.go:179] * Done! kubectl is now configured to use "functional-002359" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 13:39:46 functional-002359 crio[3683]: time="2025-11-09T13:39:46.797405168Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-9jr6w Namespace:default ID:0e38e65a63c6ec86ec8f6911fb5c352b41484281548257106e39caa733b9d48f UID:d34c4368-4034-48c2-8f47-3be1ca055a42 NetNS:/var/run/netns/9422d40b-4853-41cd-b229-1ceb6e7145ab Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000789b98}] Aliases:map[]}"
	Nov 09 13:39:46 functional-002359 crio[3683]: time="2025-11-09T13:39:46.797723069Z" level=info msg="Checking pod default_hello-node-75c85bcc94-9jr6w for CNI network kindnet (type=ptp)"
	Nov 09 13:39:46 functional-002359 crio[3683]: time="2025-11-09T13:39:46.802316226Z" level=info msg="Ran pod sandbox 0e38e65a63c6ec86ec8f6911fb5c352b41484281548257106e39caa733b9d48f with infra container: default/hello-node-75c85bcc94-9jr6w/POD" id=ad23446a-6dd4-4ae7-94cd-99b00345260e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 13:39:46 functional-002359 crio[3683]: time="2025-11-09T13:39:46.812331369Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=2faef2c3-4139-437d-89fb-d850162bcdc4 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.572823056Z" level=info msg="Stopping pod sandbox: ec9a1eebf91049e901cbdc8b8bbcecee5a88d886142f877adb5a029db1013ebf" id=c88443b0-521b-4ea9-9fc1-78f3473cfa63 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.572879622Z" level=info msg="Stopped pod sandbox (already stopped): ec9a1eebf91049e901cbdc8b8bbcecee5a88d886142f877adb5a029db1013ebf" id=c88443b0-521b-4ea9-9fc1-78f3473cfa63 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.57394979Z" level=info msg="Removing pod sandbox: ec9a1eebf91049e901cbdc8b8bbcecee5a88d886142f877adb5a029db1013ebf" id=7af161c9-950a-4514-9771-d5b1abef180d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.577678771Z" level=info msg="Removed pod sandbox: ec9a1eebf91049e901cbdc8b8bbcecee5a88d886142f877adb5a029db1013ebf" id=7af161c9-950a-4514-9771-d5b1abef180d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.580323833Z" level=info msg="Stopping pod sandbox: 3f2ebc0ee849980892b4b290653c87ad8d55dd0accd5dcbe4f582e214fb24da7" id=00b00916-acd9-4c47-8926-35f7a7ec978e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.580385216Z" level=info msg="Stopped pod sandbox (already stopped): 3f2ebc0ee849980892b4b290653c87ad8d55dd0accd5dcbe4f582e214fb24da7" id=00b00916-acd9-4c47-8926-35f7a7ec978e name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.585737049Z" level=info msg="Removing pod sandbox: 3f2ebc0ee849980892b4b290653c87ad8d55dd0accd5dcbe4f582e214fb24da7" id=aa45515b-c22f-4655-b1cf-f7db21cb5f07 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.589462461Z" level=info msg="Removed pod sandbox: 3f2ebc0ee849980892b4b290653c87ad8d55dd0accd5dcbe4f582e214fb24da7" id=aa45515b-c22f-4655-b1cf-f7db21cb5f07 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.589982144Z" level=info msg="Stopping pod sandbox: 5d5224d6fdfdbed6154abe1e3d502ce6cbf508e960e09a701bea30c25a7d1760" id=5961a7e8-c174-4370-9105-ef700d0813d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.590027937Z" level=info msg="Stopped pod sandbox (already stopped): 5d5224d6fdfdbed6154abe1e3d502ce6cbf508e960e09a701bea30c25a7d1760" id=5961a7e8-c174-4370-9105-ef700d0813d7 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.590421835Z" level=info msg="Removing pod sandbox: 5d5224d6fdfdbed6154abe1e3d502ce6cbf508e960e09a701bea30c25a7d1760" id=6bc56e26-ef4c-4eba-89fa-950fe1bfcbe8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:39:52 functional-002359 crio[3683]: time="2025-11-09T13:39:52.593909461Z" level=info msg="Removed pod sandbox: 5d5224d6fdfdbed6154abe1e3d502ce6cbf508e960e09a701bea30c25a7d1760" id=6bc56e26-ef4c-4eba-89fa-950fe1bfcbe8 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 09 13:39:58 functional-002359 crio[3683]: time="2025-11-09T13:39:58.405543018Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5eead6a6-1ab4-4c73-8308-4b6fbf0e6b84 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:40:13 functional-002359 crio[3683]: time="2025-11-09T13:40:13.40614479Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7a72b563-c094-4c7e-95a3-3d43c485886e name=/runtime.v1.ImageService/PullImage
	Nov 09 13:40:24 functional-002359 crio[3683]: time="2025-11-09T13:40:24.405709605Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a794c82f-ac78-4dc5-982f-8b14d8223af9 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:41:05 functional-002359 crio[3683]: time="2025-11-09T13:41:05.405205152Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d337f3d3-a800-4403-aeff-7238d4d833bb name=/runtime.v1.ImageService/PullImage
	Nov 09 13:41:15 functional-002359 crio[3683]: time="2025-11-09T13:41:15.405315015Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4a7dcb21-b093-4d48-928f-e25027439804 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:42:30 functional-002359 crio[3683]: time="2025-11-09T13:42:30.407451121Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b792236d-8cf5-4673-9733-c09ad0e5bef7 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:42:49 functional-002359 crio[3683]: time="2025-11-09T13:42:49.405201751Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f996b6be-bc26-46fc-a847-0c24fd0f2142 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:45:21 functional-002359 crio[3683]: time="2025-11-09T13:45:21.405076721Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=40624d77-dcfe-42ca-9a66-33e653acf214 name=/runtime.v1.ImageService/PullImage
	Nov 09 13:45:30 functional-002359 crio[3683]: time="2025-11-09T13:45:30.406167001Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0ce53552-9d8b-49df-92fe-1b37d2e9981d name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7b7fbc8d4e17d       docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33   9 minutes ago       Running             myfrontend                0                   896b3364624bf       sp-pod                                      default
	2ae6de37b4656       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   88fed42ab3423       nginx-svc                                   default
	d5485c2e8cfbb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   b27b92b6d3369       kindnet-bnks4                               kube-system
	c96d5a2830c28       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   15155c4f2056d       coredns-66bc5c9577-xpr2w                    kube-system
	a5d00d82a7ddb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   f5ef8bf123374       kube-proxy-8fpx6                            kube-system
	323eccb0e9e51       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   37c615e607346       storage-provisioner                         kube-system
	11be1b38f3bae       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   5d841cb7237ca       kube-apiserver-functional-002359            kube-system
	aee17e5b4b89d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   66d0fd70a624a       kube-controller-manager-functional-002359   kube-system
	4598e5005814d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   10e046e2efa29       kube-scheduler-functional-002359            kube-system
	c811799b68b0c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   caeb2aedb0d10       etcd-functional-002359                      kube-system
	cc0225f0d3fc8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   15155c4f2056d       coredns-66bc5c9577-xpr2w                    kube-system
	4399d84388041       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   37c615e607346       storage-provisioner                         kube-system
	4663919f4dfd1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   f5ef8bf123374       kube-proxy-8fpx6                            kube-system
	fb70fb34ba82b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   10e046e2efa29       kube-scheduler-functional-002359            kube-system
	43c7c71ce936b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   caeb2aedb0d10       etcd-functional-002359                      kube-system
	8d6e6004d07de       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   b27b92b6d3369       kindnet-bnks4                               kube-system
	93a2fe7c60151       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   66d0fd70a624a       kube-controller-manager-functional-002359   kube-system
	
	
	==> coredns [c96d5a2830c28ecab9c430b56c2ae9b86c1f8f1c4efd80a7841a6ca79fb2b47c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56497 - 6065 "HINFO IN 2858345791255568237.4245772666780404738. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017949547s
	
	
	==> coredns [cc0225f0d3fc862a263aba94b43a8360ad345e2ba74021176372dd8fb19e2f46] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53915 - 2105 "HINFO IN 7501807784434035100.7480472948433614933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03286795s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-002359
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-002359
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=functional-002359
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_36_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:36:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-002359
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 13:49:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 13:49:29 +0000   Sun, 09 Nov 2025 13:36:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 13:49:29 +0000   Sun, 09 Nov 2025 13:36:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 13:49:29 +0000   Sun, 09 Nov 2025 13:36:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 13:49:29 +0000   Sun, 09 Nov 2025 13:37:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-002359
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                17b511c3-cc9f-42f9-96e1-457c1f4860d2
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-9jr6w                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  default                     hello-node-connect-7d85dfc575-r228d          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-xpr2w                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-002359                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-bnks4                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-002359             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-002359    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8fpx6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-002359             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-002359 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-002359 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-002359 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-002359 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-002359 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-002359 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-002359 event: Registered Node functional-002359 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-002359 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-002359 event: Registered Node functional-002359 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-002359 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-002359 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-002359 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-002359 event: Registered Node functional-002359 in Controller
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 9 13:36] overlayfs: idmapped layers are currently not supported
	[ +50.497753] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [43c7c71ce936bb80862e865a4e75ba3dda7fd0cca616d0c2c10e8c87a63026a7] <==
	{"level":"warn","ts":"2025-11-09T13:38:10.768027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:10.780971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:10.820798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:10.842486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:10.851305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:10.867780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:10.929364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47316","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:38:33.208382Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-09T13:38:33.208432Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-002359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-09T13:38:33.208522Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:38:33.355554Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T13:38:33.355628Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:38:33.355650Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-09T13:38:33.355730Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-09T13:38:33.355742Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-09T13:38:33.355741Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:38:33.355769Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:38:33.355776Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-09T13:38:33.355808Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T13:38:33.355816Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T13:38:33.355823Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:38:33.359682Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-09T13:38:33.359774Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T13:38:33.359818Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-09T13:38:33.359827Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-002359","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [c811799b68b0c99b6416d77808cdd05cd43cc550675340a882e88d4ed64f45c8] <==
	{"level":"warn","ts":"2025-11-09T13:38:56.087081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.101157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.120963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.144029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.156824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.172987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.196024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.216155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.227935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.249483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.258917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.281851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.294397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.312647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.333163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.353360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.380122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.419820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.454750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.471958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.484646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T13:38:56.536979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36210","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T13:48:54.859707Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1137}
	{"level":"info","ts":"2025-11-09T13:48:54.883023Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1137,"took":"22.926545ms","hash":424195945,"current-db-size-bytes":3219456,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1433600,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-11-09T13:48:54.883113Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":424195945,"revision":1137,"compact-revision":-1}
	
	
	==> kernel <==
	 13:49:34 up 32 min,  0 user,  load average: 0.07, 0.23, 0.40
	Linux functional-002359 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8d6e6004d07de0b3c112dd7099e42b98a279d0db1103180650ecf7707de4fc98] <==
	I1109 13:38:06.023082       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 13:38:06.023316       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1109 13:38:06.023478       1 main.go:148] setting mtu 1500 for CNI 
	I1109 13:38:06.023492       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 13:38:06.023503       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T13:38:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 13:38:06.224865       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 13:38:06.231941       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 13:38:06.232019       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 13:38:06.232994       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 13:38:11.932455       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 13:38:11.932586       1 metrics.go:72] Registering metrics
	I1109 13:38:11.932690       1 controller.go:711] "Syncing nftables rules"
	I1109 13:38:16.224939       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:38:16.225010       1 main.go:301] handling current node
	I1109 13:38:26.225038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:38:26.225071       1 main.go:301] handling current node
	
	
	==> kindnet [d5485c2e8cfbb6f9f283664099b520add6fb07a38801ab322baf3742cd792fa6] <==
	I1109 13:47:28.119279       1 main.go:301] handling current node
	I1109 13:47:38.122627       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:47:38.122663       1 main.go:301] handling current node
	I1109 13:47:48.123077       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:47:48.123109       1 main.go:301] handling current node
	I1109 13:47:58.124349       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:47:58.124455       1 main.go:301] handling current node
	I1109 13:48:08.118550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:48:08.118603       1 main.go:301] handling current node
	I1109 13:48:18.119943       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:48:18.119976       1 main.go:301] handling current node
	I1109 13:48:28.124324       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:48:28.124360       1 main.go:301] handling current node
	I1109 13:48:38.119272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:48:38.119306       1 main.go:301] handling current node
	I1109 13:48:48.122885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:48:48.122919       1 main.go:301] handling current node
	I1109 13:48:58.123977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:48:58.124098       1 main.go:301] handling current node
	I1109 13:49:08.119379       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:49:08.119423       1 main.go:301] handling current node
	I1109 13:49:18.124934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:49:18.124969       1 main.go:301] handling current node
	I1109 13:49:28.123967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 13:49:28.124002       1 main.go:301] handling current node
	
	
	==> kube-apiserver [11be1b38f3bae1842ce5b91efc68f015c0af45d1a678ad6006c78f46d95f24c7] <==
	I1109 13:38:57.354829       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 13:38:57.355142       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1109 13:38:57.355528       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1109 13:38:57.362859       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 13:38:57.375292       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 13:38:57.375356       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 13:38:57.382660       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 13:38:57.406598       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 13:38:57.485367       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 13:38:58.052926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 13:38:59.026510       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 13:38:59.143283       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 13:38:59.212311       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 13:38:59.219473       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 13:39:15.233962       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 13:39:16.013681       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.199.186"}
	I1109 13:39:16.037342       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 13:39:21.467543       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.110.22.157"}
	I1109 13:39:31.887675       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 13:39:32.033932       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.106.227.210"}
	E1109 13:39:38.426975       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40942: use of closed network connection
	E1109 13:39:39.025097       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1109 13:39:46.324324       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38478: use of closed network connection
	I1109 13:39:46.532114       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.244.149"}
	I1109 13:48:57.290423       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [93a2fe7c60151ff52423241accd77838add9eae68a87a84e559f224a0cdf0925] <==
	I1109 13:38:15.102391       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:38:15.104914       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:38:15.111332       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 13:38:15.114646       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 13:38:15.116854       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 13:38:15.119091       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 13:38:15.141330       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 13:38:15.141342       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 13:38:15.142558       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 13:38:15.142644       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 13:38:15.144702       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 13:38:15.146292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:38:15.150499       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 13:38:15.153779       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 13:38:15.153856       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 13:38:15.153890       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 13:38:15.153905       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 13:38:15.153911       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 13:38:15.156400       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 13:38:15.159728       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 13:38:15.160868       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 13:38:15.165425       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:38:15.165545       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:38:15.165632       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-002359"
	I1109 13:38:15.165680       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [aee17e5b4b89d923756745dc37019948d118e3bd789e0376793283758f85349b] <==
	I1109 13:39:00.702319       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 13:39:00.702324       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 13:39:00.706319       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 13:39:00.709213       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 13:39:00.713823       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 13:39:00.715316       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 13:39:00.715465       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 13:39:00.715608       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-002359"
	I1109 13:39:00.715687       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 13:39:00.715758       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 13:39:00.717641       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 13:39:00.719617       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 13:39:00.720731       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 13:39:00.723374       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 13:39:00.726508       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 13:39:00.726946       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 13:39:00.730828       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 13:39:00.737523       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1109 13:39:00.739258       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 13:39:00.742279       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 13:39:00.744859       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 13:39:00.763556       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 13:39:00.763665       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 13:39:00.768630       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 13:39:00.775417       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4663919f4dfd1c751b900adc0403bf4c4532fc6b25a43c53c46e3a122694511a] <==
	I1109 13:38:08.623521       1 server_linux.go:53] "Using iptables proxy"
	I1109 13:38:08.698766       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:38:11.836680       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:38:11.904186       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 13:38:11.936166       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:38:12.208249       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 13:38:12.208302       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:38:12.234301       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:38:12.234658       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:38:12.234675       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:38:12.236509       1 config.go:200] "Starting service config controller"
	I1109 13:38:12.236526       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:38:12.236542       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:38:12.236546       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:38:12.236559       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:38:12.236563       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:38:12.237178       1 config.go:309] "Starting node config controller"
	I1109 13:38:12.237187       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:38:12.237193       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:38:12.339975       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:38:12.340007       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:38:12.340018       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a5d00d82a7ddbd632beb21db08c26836816fb3870ed42d02b56a437f1a7b0509] <==
	I1109 13:38:57.848528       1 server_linux.go:53] "Using iptables proxy"
	I1109 13:38:57.934616       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 13:38:58.035664       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 13:38:58.035702       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 13:38:58.035787       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 13:38:58.082624       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 13:38:58.082768       1 server_linux.go:132] "Using iptables Proxier"
	I1109 13:38:58.094676       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 13:38:58.095122       1 server.go:527] "Version info" version="v1.34.1"
	I1109 13:38:58.095194       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:38:58.098616       1 config.go:200] "Starting service config controller"
	I1109 13:38:58.098649       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 13:38:58.105109       1 config.go:106] "Starting endpoint slice config controller"
	I1109 13:38:58.105239       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 13:38:58.105318       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 13:38:58.105362       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 13:38:58.106276       1 config.go:309] "Starting node config controller"
	I1109 13:38:58.108013       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 13:38:58.108105       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 13:38:58.199301       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 13:38:58.206871       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 13:38:58.207023       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4598e5005814dd47b417d0f8794d53ddefc58dfa50f368ae0d09a55c4f0ba85c] <==
	I1109 13:38:54.994898       1 serving.go:386] Generated self-signed cert in-memory
	W1109 13:38:57.204136       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 13:38:57.204243       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 13:38:57.204279       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 13:38:57.204324       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 13:38:57.296704       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 13:38:57.299936       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:38:57.302537       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 13:38:57.302645       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:38:57.305422       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:38:57.302663       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 13:38:57.405968       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [fb70fb34ba82b21ce77b755ef23791d15855d8cc4358cccffdfd2bb2f1188601] <==
	I1109 13:38:08.059884       1 serving.go:386] Generated self-signed cert in-memory
	W1109 13:38:11.688465       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 13:38:11.688595       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 13:38:11.688631       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 13:38:11.688672       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 13:38:11.816875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 13:38:11.816901       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 13:38:11.819963       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 13:38:11.820366       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:38:11.823973       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:38:11.820393       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 13:38:11.928176       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:38:33.219443       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 13:38:33.219506       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1109 13:38:33.219526       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1109 13:38:33.219717       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1109 13:38:33.219761       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1109 13:38:33.219775       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 09 13:46:53 functional-002359 kubelet[3996]: E1109 13:46:53.404775    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:47:01 functional-002359 kubelet[3996]: E1109 13:47:01.404691    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:47:08 functional-002359 kubelet[3996]: E1109 13:47:08.405067    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:47:14 functional-002359 kubelet[3996]: E1109 13:47:14.405553    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:47:19 functional-002359 kubelet[3996]: E1109 13:47:19.404674    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:47:27 functional-002359 kubelet[3996]: E1109 13:47:27.404975    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:47:34 functional-002359 kubelet[3996]: E1109 13:47:34.405379    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:47:42 functional-002359 kubelet[3996]: E1109 13:47:42.405948    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:47:48 functional-002359 kubelet[3996]: E1109 13:47:48.405101    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:47:56 functional-002359 kubelet[3996]: E1109 13:47:56.405005    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:48:01 functional-002359 kubelet[3996]: E1109 13:48:01.404656    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:48:08 functional-002359 kubelet[3996]: E1109 13:48:08.405165    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:48:12 functional-002359 kubelet[3996]: E1109 13:48:12.405274    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:48:19 functional-002359 kubelet[3996]: E1109 13:48:19.405156    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:48:26 functional-002359 kubelet[3996]: E1109 13:48:26.404884    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:48:33 functional-002359 kubelet[3996]: E1109 13:48:33.405138    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:48:39 functional-002359 kubelet[3996]: E1109 13:48:39.405104    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:48:48 functional-002359 kubelet[3996]: E1109 13:48:48.405402    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:48:52 functional-002359 kubelet[3996]: E1109 13:48:52.405007    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:49:00 functional-002359 kubelet[3996]: E1109 13:49:00.406188    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:49:07 functional-002359 kubelet[3996]: E1109 13:49:07.404948    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:49:12 functional-002359 kubelet[3996]: E1109 13:49:12.404917    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:49:18 functional-002359 kubelet[3996]: E1109 13:49:18.404989    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	Nov 09 13:49:26 functional-002359 kubelet[3996]: E1109 13:49:26.405597    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-9jr6w" podUID="d34c4368-4034-48c2-8f47-3be1ca055a42"
	Nov 09 13:49:33 functional-002359 kubelet[3996]: E1109 13:49:33.404637    3996 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-r228d" podUID="3ebf374c-ab24-4668-9255-ce164c6b9712"
	
	
	==> storage-provisioner [323eccb0e9e512d8dadfde1919c7279694d09344514992919e364eb624dea466] <==
	W1109 13:49:09.948980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:11.952629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:11.957707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:13.960339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:13.965038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:15.968023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:15.974912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:17.977821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:17.982291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:19.985868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:19.993117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:21.996251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:22.000576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:24.003311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:24.010355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:26.016099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:26.021390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:28.024624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:28.029354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:30.036294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:30.091821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:32.095572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:32.103057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:34.106905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:49:34.114614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [4399d84388041557c1d3c9a8c4e17027e1307f8182ddc9cc872cdcc755543e0b] <==
	I1109 13:38:07.268768       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 13:38:11.922844       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 13:38:11.922903       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 13:38:12.022171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:15.495603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:19.755948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:23.354971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:26.408691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:29.430744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:29.436068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:38:29.436201       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 13:38:29.436376       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-002359_f9197926-d4c2-4df8-86cf-ae2a8b3c9da0!
	I1109 13:38:29.437181       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5f2e3043-4dec-443e-94cb-e2337e8636df", APIVersion:"v1", ResourceVersion:"580", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-002359_f9197926-d4c2-4df8-86cf-ae2a8b3c9da0 became leader
	W1109 13:38:29.446101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:29.451074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 13:38:29.536624       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-002359_f9197926-d4c2-4df8-86cf-ae2a8b3c9da0!
	W1109 13:38:31.454387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 13:38:31.459636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
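The kubelet entries in the dump above fail every pull of "kicbase/echo-server" with "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list": CRI-O resolves unqualified image names through containers-registries.conf, and in enforcing short-name mode an unqualified name with no configured alias is rejected when more than one unqualified-search registry could serve it. A minimal sketch of an alias drop-in that would let this short name resolve; the file path and name here are illustrative assumptions, not taken from this node:

    # /etc/containers/registries.conf.d/99-kicbase-echo-server.conf  (hypothetical drop-in, not present on this node)
    [aliases]
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Referencing the image by a fully qualified name in the workload would avoid short-name resolution altogether.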
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-002359 -n functional-002359
helpers_test.go:269: (dbg) Run:  kubectl --context functional-002359 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-9jr6w hello-node-connect-7d85dfc575-r228d
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-002359 describe pod hello-node-75c85bcc94-9jr6w hello-node-connect-7d85dfc575-r228d
helpers_test.go:290: (dbg) kubectl --context functional-002359 describe pod hello-node-75c85bcc94-9jr6w hello-node-connect-7d85dfc575-r228d:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-9jr6w
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-002359/192.168.49.2
	Start Time:       Sun, 09 Nov 2025 13:39:46 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvlpv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tvlpv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m49s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9jr6w to functional-002359
	  Normal   Pulling    6m46s (x5 over 9m49s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m46s (x5 over 9m49s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m46s (x5 over 9m49s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m43s (x20 over 9m48s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m28s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-r228d
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-002359/192.168.49.2
	Start Time:       Sun, 09 Nov 2025 13:39:31 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-58dk9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-58dk9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-r228d to functional-002359
	  Normal   Pulling    7m5s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m5s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m5s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     2s (x42 over 10m)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.53s)
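Both this test and TestFunctional/parallel/ServiceCmd/DeployApp below create their deployment with the unqualified image name kicbase/echo-server (see the kubectl create deployment command in the next section), so the pods never get past ImagePullBackOff and the 10m wait for app=hello-node times out. As an illustration only, a fully qualified variant of that command, with the registry and tag assumed rather than taken from the test source:

    kubectl --context functional-002359 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:latest

With a fully qualified reference the runtime pulls directly from the named registry and short-name resolution never runs.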

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-002359 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-002359 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-9jr6w" [d34c4368-4034-48c2-8f47-3be1ca055a42] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1109 13:39:50.318010    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:42:06.457217    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:42:34.160209    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:47:06.456772    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-002359 -n functional-002359
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-09 13:49:46.971825453 +0000 UTC m=+1241.273061358
functional_test.go:1460: (dbg) Run:  kubectl --context functional-002359 describe po hello-node-75c85bcc94-9jr6w -n default
functional_test.go:1460: (dbg) kubectl --context functional-002359 describe po hello-node-75c85bcc94-9jr6w -n default:
Name:             hello-node-75c85bcc94-9jr6w
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-002359/192.168.49.2
Start Time:       Sun, 09 Nov 2025 13:39:46 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tvlpv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tvlpv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-9jr6w to functional-002359
  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m55s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-002359 logs hello-node-75c85bcc94-9jr6w -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-002359 logs hello-node-75c85bcc94-9jr6w -n default: exit status 1 (96.089993ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-9jr6w" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-002359 logs hello-node-75c85bcc94-9jr6w -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.85s)
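Note: the Deployment here is created with an unqualified image (--image kicbase/echo-server), so it runs into the same enforcing short-name policy as ServiceCmdConnect above. A hedged workaround sketch, assuming the image is published on Docker Hub under kicbase/echo-server, is simply to qualify the reference when creating the Deployment:

	kubectl --context functional-002359 create deployment hello-node --image=docker.io/kicbase/echo-server:latest

Fully-qualified references are pulled directly from the named registry and never go through short-name resolution, so they are unaffected by the enforcing mode.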

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 service --namespace=default --https --url hello-node: exit status 115 (550.463279ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31237
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-002359 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
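Note: the service itself resolves (a NodePort URL is printed on stdout); the command only exits with SVC_UNREACHABLE because no ready pod backs the service, which follows directly from the hello-node ImagePullBackOff above. To separate "service missing" from "no ready endpoints", something like the following would show the empty endpoint list while the pods are stuck pulling:

	kubectl --context functional-002359 get svc hello-node -n default
	kubectl --context functional-002359 get endpoints hello-node -n default
	kubectl --context functional-002359 get pods -l app=hello-node -n default

The Format and URL subtests below fail with the same exit status for the same reason.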

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 service hello-node --url --format={{.IP}}: exit status 115 (535.943282ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-002359 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 service hello-node --url: exit status 115 (508.63113ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31237
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-002359 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31237
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image load --daemon kicbase/echo-server:functional-002359 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-002359 image load --daemon kicbase/echo-server:functional-002359 --alsologtostderr: (1.993580118s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-002359" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)
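Note: the image load --daemon command itself returns successfully; the failure is that the subsequent image ls does not show the tag, so the image never landed in the node's CRI-O store. A way to double-check directly against the runtime, assuming crictl is available inside the node, is:

	out/minikube-linux-arm64 -p functional-002359 image ls
	out/minikube-linux-arm64 -p functional-002359 ssh -- sudo crictl images | grep echo-server

The ReloadDaemon and TagAndLoadDaemon subtests below exercise the same load path and fail the same way.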

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image load --daemon kicbase/echo-server:functional-002359 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-002359" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-002359
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image load --daemon kicbase/echo-server:functional-002359 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-002359" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image save kicbase/echo-server:functional-002359 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
2025/11/09 13:50:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)
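Note: image save exits without error but never writes the tar, and the ImageLoadFromFile test below then fails on the missing file ("stat ...: no such file or directory"), so the two failures share one cause. Verifying the artifact in the Jenkins workspace is a one-liner:

	ls -l /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar

Given the load failures above, the most likely explanation is that kicbase/echo-server:functional-002359 was never present in the node's CRI-O store to begin with, so there was nothing to save.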

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1109 13:50:01.736155   31450 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:50:01.736306   31450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:50:01.736315   31450 out.go:374] Setting ErrFile to fd 2...
	I1109 13:50:01.736320   31450 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:50:01.736702   31450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:50:01.737658   31450 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:50:01.737801   31450 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:50:01.738568   31450 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
	I1109 13:50:01.767398   31450 ssh_runner.go:195] Run: systemctl --version
	I1109 13:50:01.767465   31450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
	I1109 13:50:01.796051   31450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
	I1109 13:50:01.907151   31450 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1109 13:50:01.907225   31450 cache_images.go:255] Failed to load cached images for "functional-002359": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1109 13:50:01.907255   31450 cache_images.go:267] failed pushing to: functional-002359

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-002359
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image save --daemon kicbase/echo-server:functional-002359 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-002359
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-002359: exit status 1 (53.655413ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-002359

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-002359

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)
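Note: this test saves the image back out of the cluster into the local Docker daemon and then inspects it under the localhost/ prefix (localhost/kicbase/echo-server:functional-002359), which appears to be the name the test expects after the round trip through CRI-O storage. As with the save-to-file case above, nothing comes back because the image never made it into the cluster. Listing what the daemon actually holds makes the gap visible:

	docker images | grep echo-server
	docker image inspect localhost/kicbase/echo-server:functional-002359

Both will stay empty or erroring until the image load path works again.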

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (516.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 stop --alsologtostderr -v 5
E1109 13:54:30.986444    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:41.227749    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 stop --alsologtostderr -v 5: (27.566917813s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 start --wait true --alsologtostderr -v 5
E1109 13:55:01.709829    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:55:42.672992    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:57:04.594664    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:57:06.456774    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:59:20.736060    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:59:48.436165    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:02:06.456575    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-423884 start --wait true --alsologtostderr -v 5: exit status 80 (7m49.655102849s)

                                                
                                                
-- stdout --
	* [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	* Pulling base image v0.0.48-1761985721-21837 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:54:55.113963   50941 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:54:55.114176   50941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:55.114201   50941 out.go:374] Setting ErrFile to fd 2...
	I1109 13:54:55.114221   50941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:55.114531   50941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:54:55.114968   50941 out.go:368] Setting JSON to false
	I1109 13:54:55.115825   50941 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2245,"bootTime":1762694250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:54:55.115981   50941 start.go:143] virtualization:  
	I1109 13:54:55.119256   50941 out.go:179] * [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:54:55.122910   50941 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:54:55.122982   50941 notify.go:221] Checking for updates...
	I1109 13:54:55.128665   50941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:54:55.131661   50941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:54:55.134714   50941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:54:55.137648   50941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:54:55.140756   50941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:54:55.144331   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:54:55.144477   50941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:54:55.179836   50941 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:54:55.179983   50941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:54:55.239723   50941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 13:54:55.22949764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:54:55.239832   50941 docker.go:319] overlay module found
	I1109 13:54:55.244865   50941 out.go:179] * Using the docker driver based on existing profile
	I1109 13:54:55.247777   50941 start.go:309] selected driver: docker
	I1109 13:54:55.247800   50941 start.go:930] validating driver "docker" against &{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:54:55.248067   50941 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:54:55.248171   50941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:54:55.301652   50941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 13:54:55.292554772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:54:55.302058   50941 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:54:55.302090   50941 cni.go:84] Creating CNI manager for ""
	I1109 13:54:55.302144   50941 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 13:54:55.302223   50941 start.go:353] cluster config:
	{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:54:55.305501   50941 out.go:179] * Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	I1109 13:54:55.308419   50941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:54:55.311313   50941 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:54:55.314132   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:54:55.314177   50941 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 13:54:55.314201   50941 cache.go:65] Caching tarball of preloaded images
	I1109 13:54:55.314200   50941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:54:55.314293   50941 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:54:55.314309   50941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:54:55.314456   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:54:55.334236   50941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:54:55.334260   50941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:54:55.334278   50941 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:54:55.334307   50941 start.go:360] acquireMachinesLock for ha-423884: {Name:mkda5c7a1ce8a51da0d8a40a6bd47565509d6909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:54:55.334364   50941 start.go:364] duration metric: took 38.367µs to acquireMachinesLock for "ha-423884"
	I1109 13:54:55.334396   50941 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:54:55.334402   50941 fix.go:54] fixHost starting: 
	I1109 13:54:55.334657   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:54:55.351523   50941 fix.go:112] recreateIfNeeded on ha-423884: state=Stopped err=<nil>
	W1109 13:54:55.351563   50941 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:54:55.355014   50941 out.go:252] * Restarting existing docker container for "ha-423884" ...
	I1109 13:54:55.355096   50941 cli_runner.go:164] Run: docker start ha-423884
	I1109 13:54:55.620677   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:54:55.643357   50941 kic.go:430] container "ha-423884" state is running.
	I1109 13:54:55.643727   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:54:55.666053   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:54:55.666297   50941 machine.go:94] provisionDockerMachine start ...
	I1109 13:54:55.666487   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:55.687681   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:55.688070   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:55.688081   50941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:54:55.688918   50941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 13:54:58.840047   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 13:54:58.840071   50941 ubuntu.go:182] provisioning hostname "ha-423884"
	I1109 13:54:58.840136   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:58.857828   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:58.858140   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:58.858156   50941 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884 && echo "ha-423884" | sudo tee /etc/hostname
	I1109 13:54:59.019946   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 13:54:59.020040   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.037942   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:59.038251   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:59.038273   50941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:54:59.188269   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:54:59.188306   50941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:54:59.188331   50941 ubuntu.go:190] setting up certificates
	I1109 13:54:59.188340   50941 provision.go:84] configureAuth start
	I1109 13:54:59.189373   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:54:59.208053   50941 provision.go:143] copyHostCerts
	I1109 13:54:59.208097   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:54:59.208129   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:54:59.208146   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:54:59.208224   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:54:59.208317   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:54:59.208339   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:54:59.208343   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:54:59.208378   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:54:59.208440   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:54:59.208460   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:54:59.208464   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:54:59.208489   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:54:59.208551   50941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884 san=[127.0.0.1 192.168.49.2 ha-423884 localhost minikube]
	I1109 13:54:59.461403   50941 provision.go:177] copyRemoteCerts
	I1109 13:54:59.461473   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:54:59.461549   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.479030   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:54:59.583635   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 13:54:59.583697   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:54:59.602289   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 13:54:59.602361   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1109 13:54:59.620339   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 13:54:59.620401   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:54:59.637908   50941 provision.go:87] duration metric: took 449.546564ms to configureAuth
	I1109 13:54:59.637931   50941 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:54:59.638163   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:54:59.638260   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.655128   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:59.655439   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:59.655453   50941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:55:00.057852   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:55:00.057945   50941 machine.go:97] duration metric: took 4.391628222s to provisionDockerMachine
	I1109 13:55:00.057975   50941 start.go:293] postStartSetup for "ha-423884" (driver="docker")
	I1109 13:55:00.058017   50941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:55:00.058141   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:55:00.058222   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.132751   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.319042   50941 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:55:00.329077   50941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:55:00.329106   50941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:55:00.329119   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:55:00.329189   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:55:00.329276   50941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:55:00.329283   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 13:55:00.330448   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 13:55:00.371002   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:00.407176   50941 start.go:296] duration metric: took 349.154111ms for postStartSetup
	I1109 13:55:00.407280   50941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:55:00.407327   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.431818   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.545778   50941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:55:00.551509   50941 fix.go:56] duration metric: took 5.21709566s for fixHost
	I1109 13:55:00.551542   50941 start.go:83] releasing machines lock for "ha-423884", held for 5.217154802s
	I1109 13:55:00.551634   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:55:00.570733   50941 ssh_runner.go:195] Run: cat /version.json
	I1109 13:55:00.570830   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.571142   50941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:55:00.571242   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.592161   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.595555   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.796007   50941 ssh_runner.go:195] Run: systemctl --version
	I1109 13:55:00.805538   50941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:55:00.848119   50941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:55:00.852825   50941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:55:00.852894   50941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:55:00.861218   50941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:55:00.861244   50941 start.go:496] detecting cgroup driver to use...
	I1109 13:55:00.861295   50941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:55:00.861369   50941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:55:00.877235   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:55:00.891078   50941 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:55:00.891189   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:55:00.907526   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:55:00.921033   50941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:55:01.038695   50941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:55:01.157282   50941 docker.go:234] disabling docker service ...
	I1109 13:55:01.157400   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:55:01.175939   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:55:01.191589   50941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:55:01.322566   50941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:55:01.442242   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:55:01.455592   50941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:55:01.470955   50941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:55:01.471022   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.480518   50941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:55:01.480598   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.490192   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.499971   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.508704   50941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:55:01.517693   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.526722   50941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.535091   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.544402   50941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:55:01.552454   50941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:55:01.560165   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:01.679582   50941 ssh_runner.go:195] Run: sudo systemctl restart crio
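(The sed commands above point cri-o at the registry.k8s.io/pause:3.10.1 pause image and the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf, then reload systemd and restart crio. A minimal Go sketch of the same two edits, assuming root access on the node; this is illustrative only, not minikube's own implementation:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Apply the same two sed edits shown in the log above to the cri-o
    // drop-in config, then restart the crio service. Illustrative sketch.
    func main() {
        confPath := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
        edits := []string{
            `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`,
            `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
        }
        for _, e := range edits {
            if out, err := exec.Command("sudo", "sed", "-i", e, confPath).CombinedOutput(); err != nil {
                panic(fmt.Sprintf("sed failed: %v: %s", err, out))
            }
        }
        if err := exec.Command("sudo", "systemctl", "restart", "crio").Run(); err != nil {
            panic(err)
        }
    }
)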
	I1109 13:55:01.822017   50941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:55:01.822142   50941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:55:01.826235   50941 start.go:564] Will wait 60s for crictl version
	I1109 13:55:01.826377   50941 ssh_runner.go:195] Run: which crictl
	I1109 13:55:01.830288   50941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:55:01.857542   50941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:55:01.857636   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:55:01.890996   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:55:01.922135   50941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:55:01.925065   50941 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:55:01.943662   50941 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:55:01.947786   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:55:01.958276   50941 kubeadm.go:884] updating cluster {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:55:01.958452   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:55:01.958516   50941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:55:01.997808   50941 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:55:01.997834   50941 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:55:01.997895   50941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:55:02.024927   50941 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:55:02.024953   50941 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:55:02.024962   50941 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 13:55:02.025128   50941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:55:02.025216   50941 ssh_runner.go:195] Run: crio config
	I1109 13:55:02.096570   50941 cni.go:84] Creating CNI manager for ""
	I1109 13:55:02.096595   50941 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 13:55:02.096612   50941 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:55:02.096664   50941 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423884 NodeName:ha-423884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:55:02.096862   50941 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
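	(The kubeadm.yaml rendered above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A minimal Go sketch for sanity-checking such a multi-document file is below; the gopkg.in/yaml.v3 dependency and the local file path are assumptions for illustration, not part of the captured run:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Decode each document in a multi-document kubeadm manifest and print
    // its apiVersion/kind as a quick structural check.
    func main() {
        f, err := os.Open("kubeadm.yaml") // illustrative path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
        }
    }
)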
	
	I1109 13:55:02.096890   50941 kube-vip.go:115] generating kube-vip config ...
	I1109 13:55:02.096949   50941 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 13:55:02.108971   50941 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:55:02.109069   50941 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
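	(The static pod above runs kube-vip with leader election enabled and pins the HA virtual IP 192.168.49.254 on eth0, fronting the API servers on port 8443. A quick reachability probe for that endpoint, with the address and port taken from the manifest; the snippet itself is illustrative and not part of the test run:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    // Probe the kube-vip control-plane VIP from a helper machine.
    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "VIP not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("VIP is accepting TCP connections")
    }
)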
	I1109 13:55:02.109141   50941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:55:02.117384   50941 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:55:02.117456   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1109 13:55:02.125412   50941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1109 13:55:02.139511   50941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:55:02.153010   50941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1109 13:55:02.166712   50941 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 13:55:02.180196   50941 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 13:55:02.183971   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
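	(The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the VIP. A rough Go equivalent of that idempotent update, mirroring the grep -v / echo pipeline; the helper is illustrative, not minikube's implementation:

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any line already mapping host, then appends
    // "ip<TAB>host", matching the shell pipeline in the log above.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
)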
	I1109 13:55:02.194268   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:02.311353   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:55:02.326388   50941 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.2
	I1109 13:55:02.326464   50941 certs.go:195] generating shared ca certs ...
	I1109 13:55:02.326494   50941 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.326661   50941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:55:02.326749   50941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:55:02.326774   50941 certs.go:257] generating profile certs ...
	I1109 13:55:02.326889   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 13:55:02.326942   50941 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612
	I1109 13:55:02.326978   50941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1109 13:55:02.791794   50941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 ...
	I1109 13:55:02.791832   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612: {Name:mkffe35c2a4a9e9ef2460782868fdfad2ff0b271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.792045   50941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612 ...
	I1109 13:55:02.792064   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612: {Name:mk387e11ec0c12eb2f7dfe43ad45967daf55df66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.792144   50941 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt
	I1109 13:55:02.792295   50941 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key
	I1109 13:55:02.792427   50941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 13:55:02.792446   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 13:55:02.792461   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 13:55:02.792477   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 13:55:02.792493   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 13:55:02.792513   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 13:55:02.792538   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 13:55:02.792554   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 13:55:02.792565   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 13:55:02.792626   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:55:02.792662   50941 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:55:02.792674   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:55:02.792701   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:55:02.792731   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:55:02.792756   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:55:02.792801   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:02.792832   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:02.792847   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 13:55:02.792857   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 13:55:02.793472   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:55:02.812682   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:55:02.831556   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:55:02.850358   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:55:02.868603   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 13:55:02.886617   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 13:55:02.904941   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:55:02.923399   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:55:02.941967   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:55:02.960665   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:55:02.983802   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:55:03.009751   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:55:03.031089   50941 ssh_runner.go:195] Run: openssl version
	I1109 13:55:03.048901   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:55:03.062145   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.066948   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.067030   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.126889   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:55:03.139601   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:55:03.154789   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.160976   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.161044   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.250161   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:55:03.265557   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:55:03.284581   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.293895   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.293986   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.352776   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:55:03.366601   50941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:55:03.371480   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:55:03.428568   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:55:03.484572   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:55:03.542471   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:55:03.629320   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:55:03.682786   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
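	(Each `openssl x509 -noout -checkend 86400` run above verifies that a control-plane certificate remains valid for at least another 24 hours. An equivalent check written against crypto/x509; the file path below is one of the certificates from the log, and the program itself is only a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Reimplements `openssl x509 -noout -checkend 86400` for a single PEM cert.
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(2)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least 24h")
    }
)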
	I1109 13:55:03.732655   50941 kubeadm.go:401] StartCluster: {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:55:03.732832   50941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:55:03.732937   50941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:55:03.774883   50941 cri.go:89] found id: "90f6d4700e66c1004154b3bffd5b655a9e7a54dab0ca93ca633a48ec6805be8c"
	I1109 13:55:03.774945   50941 cri.go:89] found id: "ee4108629384f7d2a0c69033ae60bc1c7015caec18238848cb6dace4abb60ac1"
	I1109 13:55:03.774963   50941 cri.go:89] found id: "dc4b89b5cdd42a6e98698322cd4a212e4b2439c3edbe3305cc3f85573f85fb2b"
	I1109 13:55:03.774978   50941 cri.go:89] found id: "ad03fe50fbbd1dace582db018b89f80349534b6604f17260fe8e6175c0110640"
	I1109 13:55:03.774996   50941 cri.go:89] found id: "435fea996772c07f8ab06a7210ea047100aeb59de8bfe2b882e29743c63515bf"
	I1109 13:55:03.775025   50941 cri.go:89] found id: ""
	I1109 13:55:03.775095   50941 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 13:55:03.797048   50941 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:55:03Z" level=error msg="open /run/runc: no such file or directory"
	I1109 13:55:03.797184   50941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:55:03.808348   50941 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 13:55:03.808412   50941 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 13:55:03.808506   50941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 13:55:03.822278   50941 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:55:03.822778   50941 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-423884" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:55:03.822942   50941 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "ha-423884" cluster setting kubeconfig missing "ha-423884" context setting]
	I1109 13:55:03.823266   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.823946   50941 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 13:55:03.824966   50941 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 13:55:03.825021   50941 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 13:55:03.825042   50941 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 13:55:03.825069   50941 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 13:55:03.825105   50941 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 13:55:03.824993   50941 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1109 13:55:03.825458   50941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 13:55:03.835686   50941 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1109 13:55:03.835761   50941 kubeadm.go:602] duration metric: took 27.320729ms to restartPrimaryControlPlane
	I1109 13:55:03.835784   50941 kubeadm.go:403] duration metric: took 103.139447ms to StartCluster
	I1109 13:55:03.835816   50941 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.835943   50941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:55:03.836573   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.836835   50941 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:55:03.836883   50941 start.go:242] waiting for startup goroutines ...
	I1109 13:55:03.836904   50941 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 13:55:03.837391   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:03.842877   50941 out.go:179] * Enabled addons: 
	I1109 13:55:03.845819   50941 addons.go:515] duration metric: took 8.899809ms for enable addons: enabled=[]
	I1109 13:55:03.845887   50941 start.go:247] waiting for cluster config update ...
	I1109 13:55:03.845910   50941 start.go:256] writing updated cluster config ...
	I1109 13:55:03.849092   50941 out.go:203] 
	I1109 13:55:03.852433   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:03.852560   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:03.856045   50941 out.go:179] * Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	I1109 13:55:03.858860   50941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:55:03.861834   50941 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:55:03.864733   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:55:03.864755   50941 cache.go:65] Caching tarball of preloaded images
	I1109 13:55:03.864808   50941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:55:03.864863   50941 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:55:03.864879   50941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:55:03.865000   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:03.890212   50941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:55:03.890235   50941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:55:03.890249   50941 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:55:03.890271   50941 start.go:360] acquireMachinesLock for ha-423884-m02: {Name:mkc465d60ac134a0502b48f535d5c2db44f7f07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:55:03.890338   50941 start.go:364] duration metric: took 47.253µs to acquireMachinesLock for "ha-423884-m02"
	I1109 13:55:03.890362   50941 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:55:03.890369   50941 fix.go:54] fixHost starting: m02
	I1109 13:55:03.890623   50941 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:55:03.914904   50941 fix.go:112] recreateIfNeeded on ha-423884-m02: state=Stopped err=<nil>
	W1109 13:55:03.914934   50941 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:55:03.917975   50941 out.go:252] * Restarting existing docker container for "ha-423884-m02" ...
	I1109 13:55:03.918057   50941 cli_runner.go:164] Run: docker start ha-423884-m02
	I1109 13:55:04.309913   50941 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:55:04.344086   50941 kic.go:430] container "ha-423884-m02" state is running.
	I1109 13:55:04.344458   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:04.369599   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:04.369844   50941 machine.go:94] provisionDockerMachine start ...
	I1109 13:55:04.369909   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:04.400285   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:04.400586   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:04.400595   50941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:55:04.401311   50941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 13:55:07.579475   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
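	(The first dial at 13:55:04 fails with `ssh: handshake failed: EOF` because the freshly restarted ha-423884-m02 container is not yet accepting SSH connections; the provisioner keeps retrying until the `hostname` command succeeds a few seconds later. The pattern, reduced to a sketch with placeholder address and timings taken loosely from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForPort retries a TCP dial until the SSH port of a just-restarted
    // container comes up, or the attempt budget is exhausted.
    func waitForPort(addr string, attempts int) error {
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            lastErr = err
            time.Sleep(time.Second)
        }
        return fmt.Errorf("port never became reachable: %w", lastErr)
    }

    func main() {
        if err := waitForPort("127.0.0.1:32813", 30); err != nil { // port taken from the log
            panic(err)
        }
        fmt.Println("ssh port is reachable")
    }
)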
	
	I1109 13:55:07.579506   50941 ubuntu.go:182] provisioning hostname "ha-423884-m02"
	I1109 13:55:07.579638   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:07.602366   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:07.602673   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:07.602690   50941 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m02 && echo "ha-423884-m02" | sudo tee /etc/hostname
	I1109 13:55:07.809995   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 13:55:07.810122   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:07.846000   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:07.846319   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:07.846341   50941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:55:08.029626   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:55:08.029653   50941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:55:08.029670   50941 ubuntu.go:190] setting up certificates
	I1109 13:55:08.029726   50941 provision.go:84] configureAuth start
	I1109 13:55:08.029805   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:08.053364   50941 provision.go:143] copyHostCerts
	I1109 13:55:08.053410   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:55:08.053445   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:55:08.053457   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:55:08.053539   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:55:08.053624   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:55:08.053647   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:55:08.053656   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:55:08.053687   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:55:08.053733   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:55:08.053755   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:55:08.053762   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:55:08.053788   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:55:08.053839   50941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m02 san=[127.0.0.1 192.168.49.3 ha-423884-m02 localhost minikube]
	I1109 13:55:08.908426   50941 provision.go:177] copyRemoteCerts
	I1109 13:55:08.908547   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:55:08.908608   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:08.925860   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:09.037241   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 13:55:09.037302   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:55:09.069800   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 13:55:09.069861   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:55:09.100884   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 13:55:09.100993   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:55:09.135925   50941 provision.go:87] duration metric: took 1.10618017s to configureAuth
	I1109 13:55:09.136002   50941 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:55:09.136280   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:09.136432   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:09.164021   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:09.164323   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:09.164337   50941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:55:10.296777   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:55:10.296814   50941 machine.go:97] duration metric: took 5.926952254s to provisionDockerMachine
	I1109 13:55:10.296825   50941 start.go:293] postStartSetup for "ha-423884-m02" (driver="docker")
	I1109 13:55:10.296872   50941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:55:10.296972   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:55:10.297065   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.332056   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.449335   50941 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:55:10.453805   50941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:55:10.453831   50941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:55:10.453843   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:55:10.453902   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:55:10.453979   50941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:55:10.453986   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 13:55:10.454091   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 13:55:10.463699   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:10.484008   50941 start.go:296] duration metric: took 187.133589ms for postStartSetup
	I1109 13:55:10.484157   50941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:55:10.484228   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.524852   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.647401   50941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:55:10.654990   50941 fix.go:56] duration metric: took 6.764614102s for fixHost
	I1109 13:55:10.655012   50941 start.go:83] releasing machines lock for "ha-423884-m02", held for 6.764660929s
	I1109 13:55:10.655097   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:10.684905   50941 out.go:179] * Found network options:
	I1109 13:55:10.687829   50941 out.go:179]   - NO_PROXY=192.168.49.2
	W1109 13:55:10.690818   50941 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 13:55:10.690871   50941 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 13:55:10.690948   50941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:55:10.690961   50941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:55:10.690989   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.691019   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.712241   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.725530   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:11.009545   50941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:55:11.084432   50941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:55:11.084558   50941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:55:11.121004   50941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:55:11.121072   50941 start.go:496] detecting cgroup driver to use...
	I1109 13:55:11.121123   50941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:55:11.121189   50941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:55:11.194725   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:55:11.266109   50941 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:55:11.266214   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:55:11.320610   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:55:11.355184   50941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:55:11.758458   50941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:55:12.038806   50941 docker.go:234] disabling docker service ...
	I1109 13:55:12.038952   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:55:12.067528   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:55:12.086834   50941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:55:12.313835   50941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:55:12.529003   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:55:12.547999   50941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:55:12.574350   50941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:55:12.574468   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.593062   50941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:55:12.593178   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.611675   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.621325   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.634011   50941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:55:12.644140   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.656327   50941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.666866   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.678918   50941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:55:12.688104   50941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:55:12.699862   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:12.929281   50941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:56:43.189989   50941 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.260670612s)
	I1109 13:56:43.190013   50941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:56:43.190063   50941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:56:43.194863   50941 start.go:564] Will wait 60s for crictl version
	I1109 13:56:43.194926   50941 ssh_runner.go:195] Run: which crictl
	I1109 13:56:43.198897   50941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:56:43.224592   50941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:56:43.224673   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:56:43.252803   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:56:43.288977   50941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:56:43.292132   50941 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 13:56:43.295175   50941 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:56:43.311775   50941 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:56:43.316096   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:56:43.327026   50941 mustload.go:66] Loading cluster: ha-423884
	I1109 13:56:43.327285   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:56:43.327549   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:56:43.344797   50941 host.go:66] Checking if "ha-423884" exists ...
	I1109 13:56:43.345106   50941 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.3
	I1109 13:56:43.345119   50941 certs.go:195] generating shared ca certs ...
	I1109 13:56:43.345155   50941 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:56:43.345275   50941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:56:43.345325   50941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:56:43.345337   50941 certs.go:257] generating profile certs ...
	I1109 13:56:43.345411   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 13:56:43.345491   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.75d82079
	I1109 13:56:43.345540   50941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 13:56:43.345557   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 13:56:43.345575   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 13:56:43.345594   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 13:56:43.345615   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 13:56:43.345628   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 13:56:43.345642   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 13:56:43.345658   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 13:56:43.345671   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 13:56:43.345729   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:56:43.345760   50941 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:56:43.345772   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:56:43.345800   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:56:43.345827   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:56:43.345850   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:56:43.345896   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:56:43.345926   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.345942   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.345953   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.346011   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:56:43.364089   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:56:43.460186   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 13:56:43.463803   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 13:56:43.471985   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 13:56:43.475672   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 13:56:43.483925   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 13:56:43.487471   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 13:56:43.495787   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 13:56:43.499361   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 13:56:43.507536   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 13:56:43.511561   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 13:56:43.520262   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 13:56:43.524097   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 13:56:43.532380   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:56:43.553569   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:56:43.574274   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:56:43.593982   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:56:43.611803   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 13:56:43.629036   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 13:56:43.646449   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:56:43.665505   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:56:43.685863   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:56:43.704695   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:56:43.725055   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:56:43.743980   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 13:56:43.757782   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 13:56:43.770797   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 13:56:43.783823   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 13:56:43.798200   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 13:56:43.811164   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 13:56:43.824190   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 13:56:43.838949   50941 ssh_runner.go:195] Run: openssl version
	I1109 13:56:43.845204   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:56:43.853394   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.857520   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.857581   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.898978   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:56:43.907056   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:56:43.915514   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.919395   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.919509   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.961298   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:56:43.969278   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:56:43.979745   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.983461   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.983552   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:56:44.024743   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:56:44.034346   50941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:56:44.038346   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:56:44.083522   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:56:44.124383   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:56:44.165272   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:56:44.207715   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:56:44.249227   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 13:56:44.295420   50941 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1109 13:56:44.295534   50941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:56:44.295575   50941 kube-vip.go:115] generating kube-vip config ...
	I1109 13:56:44.295626   50941 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 13:56:44.307501   50941 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:56:44.307559   50941 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 13:56:44.307640   50941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:56:44.315582   50941 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:56:44.315693   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 13:56:44.323673   50941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 13:56:44.336356   50941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:56:44.348987   50941 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 13:56:44.364628   50941 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 13:56:44.368185   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:56:44.378442   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:56:44.512505   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:56:44.527192   50941 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:56:44.527585   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:56:44.530787   50941 out.go:179] * Verifying Kubernetes components...
	I1109 13:56:44.533648   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:56:44.676788   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:56:44.692725   50941 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 13:56:44.692806   50941 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 13:56:44.694375   50941 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m02" to be "Ready" ...
	I1109 13:57:15.889308   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 13:57:15.889661   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:53614->192.168.49.2:8443: read: connection reset by peer
	W1109 13:57:18.195899   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:20.196010   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:22.695941   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:25.195852   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:27.695680   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:30.195776   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:32.695117   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:35.195841   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:37.694955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:41.750119   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes ha-423884-m02)
	I1109 13:58:42.976513   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 13:58:44.194875   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:46.195854   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:48.695884   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:51.195591   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:53.694984   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:55.695023   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:57.695978   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:00.195414   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:02.195699   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:04.695049   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:06.695993   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1109 14:00:12.661230   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:00:12.661517   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:43904->192.168.49.2:8443: read: connection reset by peer
	W1109 14:00:14.695160   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:16.695701   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:19.195798   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:21.695004   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:24.195895   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:26.695529   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:28.695896   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:31.194955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:33.695955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:36.194952   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:38.694903   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:41.195976   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:43.695060   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:45.695243   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:47.695603   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:49.695924   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:52.194931   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:02.696092   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	W1109 14:01:12.697352   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	I1109 14:01:14.246046   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:01:15.195896   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:17.694927   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:19.695012   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:21.695859   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:24.195971   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:26.694922   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:28.695002   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:31.195949   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:33.196044   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:35.695811   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:38.194914   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:40.195799   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:42.695109   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:45.194966   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:47.195992   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:49.694861   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:52.194884   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:54.694898   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:56.695125   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:59.195035   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:01.694940   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:03.695952   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:06.194964   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:08.694953   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:10.695760   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:13.195697   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:15.694939   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:17.695926   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:20.195916   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:22.695194   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:25.195931   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:27.694900   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:29.694960   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:32.194988   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:34.195073   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:44.694610   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": context deadline exceeded
	I1109 14:02:44.694648   50941 node_ready.go:38] duration metric: took 6m0.000230455s for node "ha-423884-m02" to be "Ready" ...
	I1109 14:02:44.698103   50941 out.go:203] 
	W1109 14:02:44.701305   50941 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1109 14:02:44.701325   50941 out.go:285] * 
	W1109 14:02:44.703469   50941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:02:44.706530   50941 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-423884 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-423884
helpers_test.go:243: (dbg) docker inspect ha-423884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	        "Created": "2025-11-09T13:50:17.166169915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:54:55.389490243Z",
	            "FinishedAt": "2025-11-09T13:54:54.671589817Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hosts",
	        "LogPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8-json.log",
	        "Name": "/ha-423884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-423884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-423884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	                "LowerDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-423884",
	                "Source": "/var/lib/docker/volumes/ha-423884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-423884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-423884",
	                "name.minikube.sigs.k8s.io": "ha-423884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89fbaf0c08047c2a06be0a8a75835803aa19533c48ff4c5735fc268ee9d93691",
	            "SandboxKey": "/var/run/docker/netns/89fbaf0c0804",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-423884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:00:10:26:7b:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b901b8dcb82129bdc4c62d2bf9cac8a365e41b87cf75b0978b149071ce152f44",
	                    "EndpointID": "20baa733fdf8670705aeddf1cfd5b1a5d39152767930d2a81eaedc478e6f1104",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-423884",
	                        "8c902201acb6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884: exit status 2 (17.96772573s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m03_ha-423884-m02.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884-m04:/home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp testdata/cp-test.txt ha-423884-m04:/home/docker/cp-test.txt                                                            │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m04.txt │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m04_ha-423884.txt                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884.txt                                                │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node start m02 --alsologtostderr -v 5                                                                                     │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:54 UTC │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │ 09 Nov 25 13:54 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5                                                                                  │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:02 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:54:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:54:55.113963   50941 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:54:55.114176   50941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:55.114201   50941 out.go:374] Setting ErrFile to fd 2...
	I1109 13:54:55.114221   50941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:55.114531   50941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:54:55.114968   50941 out.go:368] Setting JSON to false
	I1109 13:54:55.115825   50941 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2245,"bootTime":1762694250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:54:55.115981   50941 start.go:143] virtualization:  
	I1109 13:54:55.119256   50941 out.go:179] * [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:54:55.122910   50941 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:54:55.122982   50941 notify.go:221] Checking for updates...
	I1109 13:54:55.128665   50941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:54:55.131661   50941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:54:55.134714   50941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:54:55.137648   50941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:54:55.140756   50941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:54:55.144331   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:54:55.144477   50941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:54:55.179836   50941 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:54:55.179983   50941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:54:55.239723   50941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 13:54:55.22949764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:54:55.239832   50941 docker.go:319] overlay module found
	I1109 13:54:55.244865   50941 out.go:179] * Using the docker driver based on existing profile
	I1109 13:54:55.247777   50941 start.go:309] selected driver: docker
	I1109 13:54:55.247800   50941 start.go:930] validating driver "docker" against &{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:54:55.248067   50941 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:54:55.248171   50941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:54:55.301652   50941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 13:54:55.292554772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:54:55.302058   50941 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:54:55.302090   50941 cni.go:84] Creating CNI manager for ""
	I1109 13:54:55.302144   50941 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 13:54:55.302223   50941 start.go:353] cluster config:
	{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:54:55.305501   50941 out.go:179] * Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	I1109 13:54:55.308419   50941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:54:55.311313   50941 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:54:55.314132   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:54:55.314177   50941 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 13:54:55.314201   50941 cache.go:65] Caching tarball of preloaded images
	I1109 13:54:55.314200   50941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:54:55.314293   50941 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:54:55.314309   50941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:54:55.314456   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:54:55.334236   50941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:54:55.334260   50941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:54:55.334278   50941 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:54:55.334307   50941 start.go:360] acquireMachinesLock for ha-423884: {Name:mkda5c7a1ce8a51da0d8a40a6bd47565509d6909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:54:55.334364   50941 start.go:364] duration metric: took 38.367µs to acquireMachinesLock for "ha-423884"
	I1109 13:54:55.334396   50941 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:54:55.334402   50941 fix.go:54] fixHost starting: 
	I1109 13:54:55.334657   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:54:55.351523   50941 fix.go:112] recreateIfNeeded on ha-423884: state=Stopped err=<nil>
	W1109 13:54:55.351563   50941 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:54:55.355014   50941 out.go:252] * Restarting existing docker container for "ha-423884" ...
	I1109 13:54:55.355096   50941 cli_runner.go:164] Run: docker start ha-423884
	I1109 13:54:55.620677   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:54:55.643357   50941 kic.go:430] container "ha-423884" state is running.
	I1109 13:54:55.643727   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:54:55.666053   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:54:55.666297   50941 machine.go:94] provisionDockerMachine start ...
	I1109 13:54:55.666487   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:55.687681   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:55.688070   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:55.688081   50941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:54:55.688918   50941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 13:54:58.840047   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
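The handshake failure at 13:54:55 followed by a clean result at 13:54:58 is simply the SSH endpoint not being ready while the restarted container boots; the provisioner keeps retrying until sshd answers. A hedged sketch of that wait pattern (the port and rough timing are taken from the log; the retry loop itself is illustrative, not libmachine's actual logic, which also performs the SSH handshake rather than a bare TCP dial):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials the forwarded SSH port until it accepts a TCP connection
// or the deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	// 127.0.0.1:32808 is the 22/tcp mapping shown earlier for ha-423884.
	if err := waitForSSH("127.0.0.1:32808", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint is accepting connections")
}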
	
	I1109 13:54:58.840071   50941 ubuntu.go:182] provisioning hostname "ha-423884"
	I1109 13:54:58.840136   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:58.857828   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:58.858140   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:58.858156   50941 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884 && echo "ha-423884" | sudo tee /etc/hostname
	I1109 13:54:59.019946   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 13:54:59.020040   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.037942   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:59.038251   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:59.038273   50941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:54:59.188269   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:54:59.188306   50941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:54:59.188331   50941 ubuntu.go:190] setting up certificates
	I1109 13:54:59.188340   50941 provision.go:84] configureAuth start
	I1109 13:54:59.189373   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:54:59.208053   50941 provision.go:143] copyHostCerts
	I1109 13:54:59.208097   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:54:59.208129   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:54:59.208146   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:54:59.208224   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:54:59.208317   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:54:59.208339   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:54:59.208343   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:54:59.208378   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:54:59.208440   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:54:59.208460   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:54:59.208464   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:54:59.208489   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:54:59.208551   50941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884 san=[127.0.0.1 192.168.49.2 ha-423884 localhost minikube]
	I1109 13:54:59.461403   50941 provision.go:177] copyRemoteCerts
	I1109 13:54:59.461473   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:54:59.461549   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.479030   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:54:59.583635   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 13:54:59.583697   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:54:59.602289   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 13:54:59.602361   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1109 13:54:59.620339   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 13:54:59.620401   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:54:59.637908   50941 provision.go:87] duration metric: took 449.546564ms to configureAuth
	I1109 13:54:59.637931   50941 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:54:59.638163   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:54:59.638260   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.655128   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:59.655439   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:59.655453   50941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:55:00.057852   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:55:00.057945   50941 machine.go:97] duration metric: took 4.391628222s to provisionDockerMachine
	I1109 13:55:00.057975   50941 start.go:293] postStartSetup for "ha-423884" (driver="docker")
	I1109 13:55:00.058017   50941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:55:00.058141   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:55:00.058222   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.132751   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.319042   50941 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:55:00.329077   50941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:55:00.329106   50941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:55:00.329119   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:55:00.329189   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:55:00.329276   50941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:55:00.329283   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 13:55:00.330448   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 13:55:00.371002   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:00.407176   50941 start.go:296] duration metric: took 349.154111ms for postStartSetup
	I1109 13:55:00.407280   50941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:55:00.407327   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.431818   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.545778   50941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:55:00.551509   50941 fix.go:56] duration metric: took 5.21709566s for fixHost
	I1109 13:55:00.551542   50941 start.go:83] releasing machines lock for "ha-423884", held for 5.217154802s
	I1109 13:55:00.551634   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:55:00.570733   50941 ssh_runner.go:195] Run: cat /version.json
	I1109 13:55:00.570830   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.571142   50941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:55:00.571242   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.592161   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.595555   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.796007   50941 ssh_runner.go:195] Run: systemctl --version
	I1109 13:55:00.805538   50941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:55:00.848119   50941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:55:00.852825   50941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:55:00.852894   50941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:55:00.861218   50941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:55:00.861244   50941 start.go:496] detecting cgroup driver to use...
	I1109 13:55:00.861295   50941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:55:00.861369   50941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:55:00.877235   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:55:00.891078   50941 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:55:00.891189   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:55:00.907526   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:55:00.921033   50941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:55:01.038695   50941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:55:01.157282   50941 docker.go:234] disabling docker service ...
	I1109 13:55:01.157400   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:55:01.175939   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:55:01.191589   50941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:55:01.322566   50941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:55:01.442242   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:55:01.455592   50941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:55:01.470955   50941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:55:01.471022   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.480518   50941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:55:01.480598   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.490192   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.499971   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.508704   50941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:55:01.517693   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.526722   50941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.535091   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.544402   50941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:55:01.552454   50941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:55:01.560165   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:01.679582   50941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:55:01.822017   50941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:55:01.822142   50941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:55:01.826235   50941 start.go:564] Will wait 60s for crictl version
	I1109 13:55:01.826377   50941 ssh_runner.go:195] Run: which crictl
	I1109 13:55:01.830288   50941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:55:01.857542   50941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
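After the sed edits rewrite /etc/crio/crio.conf.d/02-crio.conf and crio is restarted, the start-up code above waits up to 60s for /var/run/crio/crio.sock to appear before asking crictl for a version. A small sketch of that readiness poll, assuming nothing beyond the socket path shown in the log:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for the CRI socket until it exists or the timeout
// elapses, roughly matching the "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready; safe to run crictl version")
}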
	I1109 13:55:01.857636   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:55:01.890996   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:55:01.922135   50941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:55:01.925065   50941 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:55:01.943662   50941 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:55:01.947786   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:55:01.958276   50941 kubeadm.go:884] updating cluster {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:55:01.958452   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:55:01.958516   50941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:55:01.997808   50941 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:55:01.997834   50941 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:55:01.997895   50941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:55:02.024927   50941 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:55:02.024953   50941 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:55:02.024962   50941 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 13:55:02.025128   50941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:55:02.025216   50941 ssh_runner.go:195] Run: crio config
	I1109 13:55:02.096570   50941 cni.go:84] Creating CNI manager for ""
	I1109 13:55:02.096595   50941 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 13:55:02.096612   50941 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:55:02.096664   50941 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423884 NodeName:ha-423884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:55:02.096862   50941 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:55:02.096890   50941 kube-vip.go:115] generating kube-vip config ...
	I1109 13:55:02.096949   50941 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 13:55:02.108971   50941 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:55:02.109069   50941 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
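The static kube-vip Pod above ends up announcing the VIP 192.168.49.254 via ARP only, because the lsmod probe at 13:55:02 found no ip_vs modules and control-plane load-balancing was therefore skipped. A hedged sketch of that probe; the decision logic is inferred from the log message, not taken from kube-vip or minikube source:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// ipvsAvailable runs lsmod and reports whether any ip_vs module is loaded,
// mirroring the `sudo sh -c "lsmod | grep ip_vs"` check seen above.
func ipvsAvailable() bool {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false
	}
	return bytes.Contains(out, []byte("ip_vs"))
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs present: control-plane load-balancing could be enabled")
	} else {
		fmt.Println("ip_vs missing: falling back to plain VIP announcement (as in the log above)")
	}
}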
	I1109 13:55:02.109141   50941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:55:02.117384   50941 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:55:02.117456   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1109 13:55:02.125412   50941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1109 13:55:02.139511   50941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:55:02.153010   50941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1109 13:55:02.166712   50941 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 13:55:02.180196   50941 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 13:55:02.183971   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:55:02.194268   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:02.311353   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:55:02.326388   50941 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.2
	I1109 13:55:02.326464   50941 certs.go:195] generating shared ca certs ...
	I1109 13:55:02.326494   50941 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.326661   50941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:55:02.326749   50941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:55:02.326774   50941 certs.go:257] generating profile certs ...
	I1109 13:55:02.326889   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 13:55:02.326942   50941 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612
	I1109 13:55:02.326978   50941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1109 13:55:02.791794   50941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 ...
	I1109 13:55:02.791832   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612: {Name:mkffe35c2a4a9e9ef2460782868fdfad2ff0b271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.792045   50941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612 ...
	I1109 13:55:02.792064   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612: {Name:mk387e11ec0c12eb2f7dfe43ad45967daf55df66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.792144   50941 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt
	I1109 13:55:02.792295   50941 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key
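
The regenerated apiserver certificate is signed for every address a client might use to reach this control plane: the cluster service IP 10.96.0.1, 10.0.0.1, 127.0.0.1, the three control-plane node IPs 192.168.49.2/.3/.4 and the kube-vip VIP 192.168.49.254. An illustrative way to confirm the SAN list once the file lands on the node (path taken from the scp lines further down):

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
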
	I1109 13:55:02.792427   50941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 13:55:02.792446   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 13:55:02.792461   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 13:55:02.792477   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 13:55:02.792493   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 13:55:02.792513   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 13:55:02.792538   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 13:55:02.792554   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 13:55:02.792565   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 13:55:02.792626   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:55:02.792662   50941 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:55:02.792674   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:55:02.792701   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:55:02.792731   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:55:02.792756   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:55:02.792801   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:02.792832   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:02.792847   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 13:55:02.792857   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 13:55:02.793472   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:55:02.812682   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:55:02.831556   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:55:02.850358   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:55:02.868603   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 13:55:02.886617   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 13:55:02.904941   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:55:02.923399   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:55:02.941967   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:55:02.960665   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:55:02.983802   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:55:03.009751   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:55:03.031089   50941 ssh_runner.go:195] Run: openssl version
	I1109 13:55:03.048901   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:55:03.062145   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.066948   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.067030   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.126889   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:55:03.139601   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:55:03.154789   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.160976   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.161044   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.250161   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:55:03.265557   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:55:03.284581   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.293895   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.293986   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.352776   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
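
Each CA bundle copied under /usr/share/ca-certificates is also linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem above), which is the layout OpenSSL's hashed lookup directory expects. The same two steps for an arbitrary PEM, as a sketch:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")     # e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"    # <subject-hash>.0 is the name OpenSSL resolves
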
	I1109 13:55:03.366601   50941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:55:03.371480   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:55:03.428568   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:55:03.484572   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:55:03.542471   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:55:03.629320   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:55:03.682786   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
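
The run of `openssl x509 ... -checkend 86400` calls verifies that none of the existing control-plane certificates expires within the next 24 hours (86400 seconds); a non-zero exit here would trigger regeneration. For example:

    # Exit 0: still valid 24h from now; exit 1: expires (or already expired) within that window.
    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "still valid tomorrow" || echo "expires within 24h"
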
	I1109 13:55:03.732655   50941 kubeadm.go:401] StartCluster: {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:55:03.732832   50941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:55:03.732937   50941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:55:03.774883   50941 cri.go:89] found id: "90f6d4700e66c1004154b3bffd5b655a9e7a54dab0ca93ca633a48ec6805be8c"
	I1109 13:55:03.774945   50941 cri.go:89] found id: "ee4108629384f7d2a0c69033ae60bc1c7015caec18238848cb6dace4abb60ac1"
	I1109 13:55:03.774963   50941 cri.go:89] found id: "dc4b89b5cdd42a6e98698322cd4a212e4b2439c3edbe3305cc3f85573f85fb2b"
	I1109 13:55:03.774978   50941 cri.go:89] found id: "ad03fe50fbbd1dace582db018b89f80349534b6604f17260fe8e6175c0110640"
	I1109 13:55:03.774996   50941 cri.go:89] found id: "435fea996772c07f8ab06a7210ea047100aeb59de8bfe2b882e29743c63515bf"
	I1109 13:55:03.775025   50941 cri.go:89] found id: ""
	I1109 13:55:03.775095   50941 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 13:55:03.797048   50941 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:55:03Z" level=error msg="open /run/runc: no such file or directory"
	I1109 13:55:03.797184   50941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:55:03.808348   50941 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 13:55:03.808412   50941 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 13:55:03.808506   50941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 13:55:03.822278   50941 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:55:03.822778   50941 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-423884" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:55:03.822942   50941 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "ha-423884" cluster setting kubeconfig missing "ha-423884" context setting]
	I1109 13:55:03.823266   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.823946   50941 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 13:55:03.824966   50941 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 13:55:03.825021   50941 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 13:55:03.825042   50941 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 13:55:03.825069   50941 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 13:55:03.825105   50941 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 13:55:03.824993   50941 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1109 13:55:03.825458   50941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 13:55:03.835686   50941 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1109 13:55:03.835761   50941 kubeadm.go:602] duration metric: took 27.320729ms to restartPrimaryControlPlane
	I1109 13:55:03.835784   50941 kubeadm.go:403] duration metric: took 103.139447ms to StartCluster
	I1109 13:55:03.835816   50941 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.835943   50941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:55:03.836573   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.836835   50941 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:55:03.836883   50941 start.go:242] waiting for startup goroutines ...
	I1109 13:55:03.836904   50941 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 13:55:03.837391   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:03.842877   50941 out.go:179] * Enabled addons: 
	I1109 13:55:03.845819   50941 addons.go:515] duration metric: took 8.899809ms for enable addons: enabled=[]
	I1109 13:55:03.845887   50941 start.go:247] waiting for cluster config update ...
	I1109 13:55:03.845910   50941 start.go:256] writing updated cluster config ...
	I1109 13:55:03.849092   50941 out.go:203] 
	I1109 13:55:03.852433   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:03.852560   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:03.856045   50941 out.go:179] * Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	I1109 13:55:03.858860   50941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:55:03.861834   50941 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:55:03.864733   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:55:03.864755   50941 cache.go:65] Caching tarball of preloaded images
	I1109 13:55:03.864808   50941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:55:03.864863   50941 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:55:03.864879   50941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:55:03.865000   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:03.890212   50941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:55:03.890235   50941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:55:03.890249   50941 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:55:03.890271   50941 start.go:360] acquireMachinesLock for ha-423884-m02: {Name:mkc465d60ac134a0502b48f535d5c2db44f7f07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:55:03.890338   50941 start.go:364] duration metric: took 47.253µs to acquireMachinesLock for "ha-423884-m02"
	I1109 13:55:03.890362   50941 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:55:03.890369   50941 fix.go:54] fixHost starting: m02
	I1109 13:55:03.890623   50941 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:55:03.914904   50941 fix.go:112] recreateIfNeeded on ha-423884-m02: state=Stopped err=<nil>
	W1109 13:55:03.914934   50941 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:55:03.917975   50941 out.go:252] * Restarting existing docker container for "ha-423884-m02" ...
	I1109 13:55:03.918057   50941 cli_runner.go:164] Run: docker start ha-423884-m02
	I1109 13:55:04.309913   50941 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:55:04.344086   50941 kic.go:430] container "ha-423884-m02" state is running.
	I1109 13:55:04.344458   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:04.369599   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:04.369844   50941 machine.go:94] provisionDockerMachine start ...
	I1109 13:55:04.369909   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:04.400285   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:04.400586   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:04.400595   50941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:55:04.401311   50941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 13:55:07.579475   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 13:55:07.579506   50941 ubuntu.go:182] provisioning hostname "ha-423884-m02"
	I1109 13:55:07.579638   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:07.602366   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:07.602673   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:07.602690   50941 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m02 && echo "ha-423884-m02" | sudo tee /etc/hostname
	I1109 13:55:07.809995   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 13:55:07.810122   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:07.846000   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:07.846319   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:07.846341   50941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:55:08.029626   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:55:08.029653   50941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:55:08.029670   50941 ubuntu.go:190] setting up certificates
	I1109 13:55:08.029726   50941 provision.go:84] configureAuth start
	I1109 13:55:08.029805   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:08.053364   50941 provision.go:143] copyHostCerts
	I1109 13:55:08.053410   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:55:08.053445   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:55:08.053457   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:55:08.053539   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:55:08.053624   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:55:08.053647   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:55:08.053656   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:55:08.053687   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:55:08.053733   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:55:08.053755   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:55:08.053762   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:55:08.053788   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:55:08.053839   50941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m02 san=[127.0.0.1 192.168.49.3 ha-423884-m02 localhost minikube]
	I1109 13:55:08.908426   50941 provision.go:177] copyRemoteCerts
	I1109 13:55:08.908547   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:55:08.908608   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:08.925860   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:09.037241   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 13:55:09.037302   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:55:09.069800   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 13:55:09.069861   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:55:09.100884   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 13:55:09.100993   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:55:09.135925   50941 provision.go:87] duration metric: took 1.10618017s to configureAuth
	I1109 13:55:09.136002   50941 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:55:09.136280   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:09.136432   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:09.164021   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:09.164323   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:09.164337   50941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:55:10.296777   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:55:10.296814   50941 machine.go:97] duration metric: took 5.926952254s to provisionDockerMachine
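
The CRIO_MINIKUBE_OPTIONS drop-in written during provisioning marks the whole service CIDR (10.96.0.0/12) as an insecure registry range before restarting CRI-O, so registries exposed on cluster service IPs (e.g. the registry addon) can be used without TLS. An illustrative check of what ended up on the node:

    sudo cat /etc/sysconfig/crio.minikube
    # expected, per the command above:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
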
	I1109 13:55:10.296825   50941 start.go:293] postStartSetup for "ha-423884-m02" (driver="docker")
	I1109 13:55:10.296872   50941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:55:10.296972   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:55:10.297065   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.332056   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.449335   50941 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:55:10.453805   50941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:55:10.453831   50941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:55:10.453843   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:55:10.453902   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:55:10.453979   50941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:55:10.453986   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 13:55:10.454091   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 13:55:10.463699   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:10.484008   50941 start.go:296] duration metric: took 187.133589ms for postStartSetup
	I1109 13:55:10.484157   50941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:55:10.484228   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.524852   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.647401   50941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:55:10.654990   50941 fix.go:56] duration metric: took 6.764614102s for fixHost
	I1109 13:55:10.655012   50941 start.go:83] releasing machines lock for "ha-423884-m02", held for 6.764660929s
	I1109 13:55:10.655097   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:10.684905   50941 out.go:179] * Found network options:
	I1109 13:55:10.687829   50941 out.go:179]   - NO_PROXY=192.168.49.2
	W1109 13:55:10.690818   50941 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 13:55:10.690871   50941 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 13:55:10.690948   50941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:55:10.690961   50941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:55:10.690989   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.691019   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.712241   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.725530   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:11.009545   50941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:55:11.084432   50941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:55:11.084558   50941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:55:11.121004   50941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
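
Before wiring up its own CNI, minikube parks any pre-existing bridge/podman configs in /etc/cni/net.d by renaming them to *.mk_disabled; here the find matched nothing, hence the "nothing to disable" message. If any had been present they could be listed afterwards with, for example:

    ls -l /etc/cni/net.d/*.mk_disabled 2>/dev/null || echo "no disabled CNI configs"
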
	I1109 13:55:11.121072   50941 start.go:496] detecting cgroup driver to use...
	I1109 13:55:11.121123   50941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:55:11.121189   50941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:55:11.194725   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:55:11.266109   50941 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:55:11.266214   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:55:11.320610   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:55:11.355184   50941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:55:11.758458   50941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:55:12.038806   50941 docker.go:234] disabling docker service ...
	I1109 13:55:12.038952   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:55:12.067528   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:55:12.086834   50941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:55:12.313835   50941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:55:12.529003   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
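
Since this profile runs CRI-O, the cri-docker socket/service and the docker service are stopped, disabled and masked so they cannot be socket-activated back and compete for the CRI endpoint. An illustrative status check after these commands:

    systemctl is-active crio docker cri-docker.service
    systemctl is-enabled docker.socket cri-docker.socket   # masked units report "masked"
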
	I1109 13:55:12.547999   50941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:55:12.574350   50941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:55:12.574468   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.593062   50941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:55:12.593178   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.611675   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.621325   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.634011   50941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:55:12.644140   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.656327   50941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.666866   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.678918   50941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:55:12.688104   50941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:55:12.699862   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:12.929281   50941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:56:43.189989   50941 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.260670612s)
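
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager with conmon in the pod cgroup, and a default sysctl opening unprivileged low ports, while /etc/crictl.yaml points crictl at the CRI-O socket; CRI-O is then restarted (here taking a notable 1m30s). The resulting values can be spot-checked with something like:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml    # runtime-endpoint: unix:///var/run/crio/crio.sock
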
	I1109 13:56:43.190013   50941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:56:43.190063   50941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:56:43.194863   50941 start.go:564] Will wait 60s for crictl version
	I1109 13:56:43.194926   50941 ssh_runner.go:195] Run: which crictl
	I1109 13:56:43.198897   50941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:56:43.224592   50941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:56:43.224673   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:56:43.252803   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:56:43.288977   50941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:56:43.292132   50941 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 13:56:43.295175   50941 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:56:43.311775   50941 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:56:43.316096   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:56:43.327026   50941 mustload.go:66] Loading cluster: ha-423884
	I1109 13:56:43.327285   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:56:43.327549   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:56:43.344797   50941 host.go:66] Checking if "ha-423884" exists ...
	I1109 13:56:43.345106   50941 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.3
	I1109 13:56:43.345119   50941 certs.go:195] generating shared ca certs ...
	I1109 13:56:43.345155   50941 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:56:43.345275   50941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:56:43.345325   50941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:56:43.345337   50941 certs.go:257] generating profile certs ...
	I1109 13:56:43.345411   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 13:56:43.345491   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.75d82079
	I1109 13:56:43.345540   50941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 13:56:43.345557   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 13:56:43.345575   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 13:56:43.345594   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 13:56:43.345615   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 13:56:43.345628   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 13:56:43.345642   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 13:56:43.345658   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 13:56:43.345671   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 13:56:43.345729   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:56:43.345760   50941 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:56:43.345772   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:56:43.345800   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:56:43.345827   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:56:43.345850   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:56:43.345896   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:56:43.345926   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.345942   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.345953   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.346011   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:56:43.364089   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:56:43.460186   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 13:56:43.463803   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 13:56:43.471985   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 13:56:43.475672   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 13:56:43.483925   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 13:56:43.487471   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 13:56:43.495787   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 13:56:43.499361   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 13:56:43.507536   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 13:56:43.511561   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 13:56:43.520262   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 13:56:43.524097   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 13:56:43.532380   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:56:43.553569   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:56:43.574274   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:56:43.593982   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:56:43.611803   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 13:56:43.629036   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 13:56:43.646449   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:56:43.665505   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:56:43.685863   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:56:43.704695   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:56:43.725055   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:56:43.743980   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 13:56:43.757782   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 13:56:43.770797   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 13:56:43.783823   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 13:56:43.798200   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 13:56:43.811164   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 13:56:43.824190   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
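
To join m02 as an additional control plane, the cluster-wide key material that kubeadm requires to be identical on every control-plane node is read off the primary (sa.pub/sa.key, the front-proxy CA and the etcd CA) and pushed into /var/lib/minikube/certs on m02 alongside the shared minikube CA, the profile's apiserver and proxy-client certs, and the kubeconfig. An illustrative check that the shared files landed:

    sudo ls -l /var/lib/minikube/certs/sa.pub /var/lib/minikube/certs/sa.key \
               /var/lib/minikube/certs/front-proxy-ca.crt /var/lib/minikube/certs/front-proxy-ca.key \
               /var/lib/minikube/certs/etcd/ca.crt /var/lib/minikube/certs/etcd/ca.key
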
	I1109 13:56:43.838949   50941 ssh_runner.go:195] Run: openssl version
	I1109 13:56:43.845204   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:56:43.853394   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.857520   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.857581   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.898978   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:56:43.907056   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:56:43.915514   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.919395   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.919509   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.961298   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:56:43.969278   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:56:43.979745   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.983461   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.983552   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:56:44.024743   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:56:44.034346   50941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:56:44.038346   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:56:44.083522   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:56:44.124383   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:56:44.165272   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:56:44.207715   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:56:44.249227   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 13:56:44.295420   50941 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1109 13:56:44.295534   50941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
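
The kubelet drop-in above clears the stock ExecStart and relaunches the version-pinned binary with node-specific overrides (--hostname-override=ha-423884-m02, --node-ip=192.168.49.3) plus the bootstrap kubeconfig used until the node's own kubelet client certificate is issued; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. On the node it can be inspected with, for example:

    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
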
	I1109 13:56:44.295575   50941 kube-vip.go:115] generating kube-vip config ...
	I1109 13:56:44.295626   50941 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 13:56:44.307501   50941 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
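
kube-vip's control-plane load-balancing mode depends on the IPVS kernel modules; since `lsmod | grep ip_vs` finds none inside the container, minikube gives up on it and falls back to the plain ARP-advertised VIP configured in the manifest that follows. An illustrative check on a host where the modules are expected:

    lsmod | grep ip_vs || echo "ip_vs modules not loaded"
    # On hosts that ship them as modules they can usually be loaded with: sudo modprobe ip_vs
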
	I1109 13:56:44.307559   50941 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 13:56:44.307640   50941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:56:44.315582   50941 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:56:44.315693   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 13:56:44.323673   50941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 13:56:44.336356   50941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:56:44.348987   50941 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
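kube-vip.yaml lands in /etc/kubernetes/manifests, so it runs as a static pod managed directly by the kubelet rather than through the API server; that is what lets the VIP 192.168.49.254 come up (or fail to) independently of the control plane being reachable. Two quick node-side checks, assuming the interface and address from the generated config above:

	# static pod manifests in this directory are picked up by the kubelet on its own
	ls /etc/kubernetes/manifests/
	# on whichever node currently holds the kube-vip lease, the VIP is bound to eth0
	ip addr show dev eth0 | grep 192.168.49.254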
	I1109 13:56:44.364628   50941 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 13:56:44.368185   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
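The /etc/hosts rewrite is an idempotent upsert: filter out any existing control-plane.minikube.internal line, append the current VIP mapping, and copy the result back over /etc/hosts via a temporary file. The same pattern, spelled out (hostname and VIP taken from the command above; the temp path is arbitrary):

	# drop any stale entry, append the current mapping, then install the new file
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  printf '192.168.49.254\tcontrol-plane.minikube.internal\n'
	} > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts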
	I1109 13:56:44.378442   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:56:44.512505   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
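The drop-in written earlier (10-kubeadm.conf) replaces ExecStart with the node-specific kubelet flags, which is why a `systemctl daemon-reload` precedes the start. When a joining node never reaches Ready, the merged unit and the kubelet journal are usually the first things worth checking on that node:

	# effective unit = kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl cat kubelet
	# recent kubelet logs on the node
	sudo journalctl -u kubelet --no-pager -n 100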
	I1109 13:56:44.527192   50941 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:56:44.527585   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:56:44.530787   50941 out.go:179] * Verifying Kubernetes components...
	I1109 13:56:44.533648   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:56:44.676788   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:56:44.692725   50941 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 13:56:44.692806   50941 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 13:56:44.694375   50941 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m02" to be "Ready" ...
	I1109 13:57:15.889308   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 13:57:15.889661   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:53614->192.168.49.2:8443: read: connection reset by peer
	W1109 13:57:18.195899   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:20.196010   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:22.695941   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:25.195852   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:27.695680   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:30.195776   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:32.695117   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:35.195841   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:37.694955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:41.750119   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes ha-423884-m02)
	I1109 13:58:42.976513   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 13:58:44.194875   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:46.195854   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:48.695884   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:51.195591   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:53.694984   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:55.695023   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:57.695978   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:00.195414   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:02.195699   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:04.695049   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:06.695993   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1109 14:00:12.661230   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:00:12.661517   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:43904->192.168.49.2:8443: read: connection reset by peer
	W1109 14:00:14.695160   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:16.695701   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:19.195798   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:21.695004   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:24.195895   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:26.695529   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:28.695896   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:31.194955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:33.695955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:36.194952   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:38.694903   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:41.195976   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:43.695060   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:45.695243   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:47.695603   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:49.695924   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:52.194931   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:02.696092   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	W1109 14:01:12.697352   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	I1109 14:01:14.246046   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:01:15.195896   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:17.694927   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:19.695012   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:21.695859   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:24.195971   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:26.694922   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:28.695002   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:31.195949   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:33.196044   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:35.695811   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:38.194914   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:40.195799   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:42.695109   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:45.194966   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:47.195992   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:49.694861   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:52.194884   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:54.694898   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:56.695125   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:59.195035   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:01.694940   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:03.695952   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:06.194964   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:08.694953   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:10.695760   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:13.195697   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:15.694939   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:17.695926   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:20.195916   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:22.695194   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:25.195931   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:27.694900   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:29.694960   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:32.194988   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:34.195073   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:44.694610   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": context deadline exceeded
	I1109 14:02:44.694648   50941 node_ready.go:38] duration metric: took 6m0.000230455s for node "ha-423884-m02" to be "Ready" ...
	I1109 14:02:44.698103   50941 out.go:203] 
	W1109 14:02:44.701305   50941 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1109 14:02:44.701325   50941 out.go:285] * 
	W1109 14:02:44.703469   50941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:02:44.706530   50941 out.go:203] 
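The six-minute loop above is minikube polling the joining node's Ready condition through https://192.168.49.2:8443, and every attempt fails because the apiserver behind that endpoint is itself crash-looping (see the component logs below). A manual version of the same probe, assuming the kubectl context minikube creates for this profile:

	# Ready condition of the joining node; "True" means Ready
	kubectl --context ha-423884 get node ha-423884-m02 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# connectivity check: even a 401/403 would prove the apiserver is listening,
	# whereas "connection refused" matches the failures above
	curl -k https://192.168.49.2:8443/healthz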
	
	
	==> CRI-O <==
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.551728258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.559379136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.56010363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.578180754Z" level=info msg="Created container fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396: kube-system/kube-controller-manager-ha-423884/kube-controller-manager" id=914f69c7-9e75-4685-97ae-ce6d487a80eb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.578978029Z" level=info msg="Starting container: fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396" id=7b0cb328-02c2-455b-8d26-92c535c13320 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.581346511Z" level=info msg="Started container" PID=1238 containerID=fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396 description=kube-system/kube-controller-manager-ha-423884/kube-controller-manager id=7b0cb328-02c2-455b-8d26-92c535c13320 name=/runtime.v1.RuntimeService/StartContainer sandboxID=575e9e561d03bd73bfc2977eee9d1a87cf7e044ef1af38644f54f805e50974ba
	Nov 09 14:02:21 ha-423884 conmon[1235]: conmon fb43ae7a5bc7148d3183 <ninfo>: container 1238 exited with status 1
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.691484867Z" level=info msg="Removing container: ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.700637954Z" level=info msg="Error loading conmon cgroup of container ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda: cgroup deleted" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.70373976Z" level=info msg="Removed container ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda: kube-system/kube-controller-manager-ha-423884/kube-controller-manager" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.550431718Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=eb071ab8-2487-4b87-9951-f3406f3f724d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.551499117Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=dc022fcb-5120-4661-9098-70f50aeb80b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.552772713Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=a5b314ab-13da-40fb-be88-4a7f65ae1f46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.552913276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.558216367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.558699867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.577264476Z" level=info msg="Created container f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=a5b314ab-13da-40fb-be88-4a7f65ae1f46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.577889688Z" level=info msg="Starting container: f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0" id=1a740b53-426e-446b-bea6-3a698a721a3b name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.580618576Z" level=info msg="Started container" PID=1253 containerID=f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0 description=kube-system/kube-apiserver-ha-423884/kube-apiserver id=1a740b53-426e-446b-bea6-3a698a721a3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=e385df39fa9a74c7a559091711257de7f4454e0e52edc9948675220b19108eb4
	Nov 09 14:02:57 ha-423884 conmon[1251]: conmon f857691ef21f6060a315 <ninfo>: container 1253 exited with status 255
	Nov 09 14:02:57 ha-423884 crio[667]: time="2025-11-09T14:02:57.959662491Z" level=info msg="Stopping container: f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0 (timeout: 30s)" id=55305a25-44dd-4ddb-b724-39a9d92f3c50 name=/runtime.v1.RuntimeService/StopContainer
	Nov 09 14:02:57 ha-423884 crio[667]: time="2025-11-09T14:02:57.971272541Z" level=info msg="Stopped container f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=55305a25-44dd-4ddb-b724-39a9d92f3c50 name=/runtime.v1.RuntimeService/StopContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.779081549Z" level=info msg="Removing container: dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.786993816Z" level=info msg="Error loading conmon cgroup of container dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1: cgroup deleted" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.790080944Z" level=info msg="Removed container dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f857691ef21f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   26 seconds ago      Exited              kube-apiserver            6                   e385df39fa9a7       kube-apiserver-ha-423884            kube-system
	fb43ae7a5bc71       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   53 seconds ago      Exited              kube-controller-manager   7                   575e9e561d03b       kube-controller-manager-ha-423884   kube-system
	c2bc167e20428       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   2 minutes ago       Running             etcd                      2                   c523e19ee75d0       etcd-ha-423884                      kube-system
	ee4108629384f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   eee2ee895e800       kube-scheduler-ha-423884            kube-system
	dc4b89b5cdd42       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   babe0da53b9cc       kube-vip-ha-423884                  kube-system
	ad03fe50fbbd1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Exited              etcd                      1                   c523e19ee75d0       etcd-ha-423884                      kube-system
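Note the pattern in this table: etcd, the scheduler, and kube-vip are Running, while kube-apiserver and kube-controller-manager have Exited with attempt counts of 6 and 7, i.e. they are crash-looping. The same view can be pulled on the node with crictl, which talks to CRI-O directly and therefore works with the apiserver down (container ID prefix taken from the table):

	# list all containers, including exited ones, straight from the CRI runtime
	sudo crictl ps -a
	# log output of the crash-looping apiserver container
	sudo crictl logs f857691ef21f6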
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 9 13:36] overlayfs: idmapped layers are currently not supported
	[ +50.497753] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:53] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:55] overlayfs: idmapped layers are currently not supported
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ad03fe50fbbd1dace582db018b89f80349534b6604f17260fe8e6175c0110640] <==
	{"level":"warn","ts":"2025-11-09T14:00:20.007830Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:00:20.007902Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:00:20.007914Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.007856Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-09T14:00:20.008037Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008080Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008125Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:00:20.007973Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:00:20.008184Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:00:20.008220Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.008299Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008345Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008407Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008440Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008470Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008501Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008542Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008588Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008623Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008664Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008699Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.021805Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-09T14:00:20.021895Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.021928Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-09T14:00:20.021935Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-423884","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [c2bc167e204287c49f92f3ea3b5ca2ff40be8e2eed3675512ec65e082d5b7ed6] <==
	{"level":"info","ts":"2025-11-09T14:03:00.290641Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:00.290693Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to b6e80321287bcc6a at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:00.290735Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:00.290800Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:00.290849Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:00.481423Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160421,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:00.981676Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160421,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:01.482302Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160421,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-09T14:03:01.690283Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:01.690580Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:01.690656Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to b6e80321287bcc6a at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:01.690695Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:01.690765Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:01.690824Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:01.982798Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160421,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:02.483949Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160421,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:02.985010Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160421,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-09T14:03:03.090865Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:03.090944Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:03.090967Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to b6e80321287bcc6a at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:03.090976Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:03.091004Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:03.091014Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:03.227666Z","caller":"etcdserver/server.go:1814","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-423884 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"warn","ts":"2025-11-09T14:03:03.485505Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160421,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 14:03:03 up 45 min,  0 user,  load average: 0.33, 0.75, 0.93
	Linux ha-423884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0] <==
	I1109 14:02:36.634199       1 server.go:150] Version: v1.34.1
	I1109 14:02:36.634303       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1109 14:02:37.909918       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:02:37.910014       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1109 14:02:37.910049       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:02:37.910094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1109 14:02:37.910134       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1109 14:02:37.910171       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:02:37.910203       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:02:37.910236       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:02:37.910268       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1109 14:02:37.910286       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:02:37.910291       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:02:37.910295       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:02:37.926088       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:02:37.927390       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:02:37.927960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:02:37.944693       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:02:37.950338       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:02:37.950373       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:02:37.950622       1 instance.go:239] Using reconciler: lease
	W1109 14:02:37.952361       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:02:57.925789       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:02:57.928142       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1109 14:02:57.951648       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396] <==
	I1109 14:02:10.123805       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:02:11.457380       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1109 14:02:11.457409       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:02:11.458876       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:02:11.459053       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:02:11.459304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1109 14:02:11.459359       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:02:21.460773       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [ee4108629384f7d2a0c69033ae60bc1c7015caec18238848cb6dace4abb60ac1] <==
	E1109 14:02:03.355728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:02:05.562387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:02:12.788932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:02:16.248109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:02:18.786160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:02:21.806031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:02:22.349416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:02:22.944232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:02:23.105929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:02:25.053888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:02:27.099287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:02:27.123796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:02:32.519640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:02:33.877009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:02:34.203560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:02:34.291436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:02:50.965757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1109 14:02:52.910441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:02:54.093633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:02:56.581927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:02:58.739939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:02:58.959102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56852->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:02:58.959223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56844->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:03:00.781917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:03:01.017035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kubelet <==
	Nov 09 14:03:01 ha-423884 kubelet[803]: E1109 14:03:01.589337     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:01 ha-423884 kubelet[803]: E1109 14:03:01.690526     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:01 ha-423884 kubelet[803]: E1109 14:03:01.791269     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:01 ha-423884 kubelet[803]: E1109 14:03:01.892469     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:01 ha-423884 kubelet[803]: E1109 14:03:01.993098     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.094209     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.185240     803 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8443/api/v1/namespaces/default/events/ha-423884.18765b213fede0a9\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-423884.18765b213fede0a9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-423884,UID:ha-423884,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-423884 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-423884,},FirstTimestamp:2025-11-09 13:55:02.526730409 +0000 UTC m=+0.194632428,LastTimestamp:2025-11-09 13:55:02.634347734 +0000 UTC m=+0.302249753,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-423884,}"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.195770     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.296933     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.397451     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.499000     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.534898     803 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e161fd5c87d95ac5cc5bf9471f10f841950c7cdcce8fdbc0165b0016f9a196ac/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e161fd5c87d95ac5cc5bf9471f10f841950c7cdcce8fdbc0165b0016f9a196ac/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/kube-system_kube-apiserver-ha-423884_a804b94b9eb13f887b086a5e7ad93ef5/kube-apiserver/5.log" to get inode usage: stat /var/log/pods/kube-system_kube-apiserver-ha-423884_a804b94b9eb13f887b086a5e7ad93ef5/kube-apiserver/5.log: no such file or directory
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.582574     803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-423884\" not found"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.599490     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.700352     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.801150     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:02 ha-423884 kubelet[803]: E1109 14:03:02.901604     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:03 ha-423884 kubelet[803]: E1109 14:03:03.002234     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:03 ha-423884 kubelet[803]: E1109 14:03:03.102762     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:03 ha-423884 kubelet[803]: E1109 14:03:03.203740     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:03 ha-423884 kubelet[803]: E1109 14:03:03.304853     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:03 ha-423884 kubelet[803]: E1109 14:03:03.405981     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:03 ha-423884 kubelet[803]: E1109 14:03:03.506669     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:03 ha-423884 kubelet[803]: E1109 14:03:03.607734     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:03 ha-423884 kubelet[803]: E1109 14:03:03.708582     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884: exit status 2 (349.926709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-423884" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (516.75s)
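
Every failure in the logs above reduces to the same symptom: nothing on the node can reach the apiserver at https://192.168.49.2:8443 (connection refused, TLS handshake timeout, no route to host). A minimal probe like the sketch below, run on the host, reproduces that check outside the test harness; the URL and the decision to skip TLS verification are assumptions for a throwaway diagnostic, not how minikube's own health check is implemented.

	// healthz_probe.go - a throwaway sketch (not minikube code) that probes the
	// apiserver /healthz endpoint seen failing in the logs above. Skipping TLS
	// verification is an assumption for a quick diagnostic only.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			// "connection refused" here matches the reflector and kubelet errors above.
			fmt.Println("healthz probe failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
	}

A healthy apiserver answers this probe with 200 and the body "ok"; in the state captured above it fails the same way the controller-manager and scheduler do.
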

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (18.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-423884 node delete m03 --alsologtostderr -v 5: exit status 83 (169.652325ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-423884-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-423884"

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:03:04.237339   54462 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:03:04.237562   54462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:04.237741   54462 out.go:374] Setting ErrFile to fd 2...
	I1109 14:03:04.237775   54462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:04.238088   54462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:03:04.238441   54462 mustload.go:66] Loading cluster: ha-423884
	I1109 14:03:04.238940   54462 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:04.239435   54462 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:04.256859   54462 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:03:04.257206   54462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:04.310561   54462 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:03:04.300820661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:04.310960   54462 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:04.327523   54462 host.go:66] Checking if "ha-423884-m02" exists ...
	I1109 14:03:04.328107   54462 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:03:04.348168   54462 out.go:179] * The control-plane node ha-423884-m03 host is not running: state=Stopped
	I1109 14:03:04.351154   54462 out.go:179]   To start a cluster, run: "minikube start -p ha-423884"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-arm64 -p ha-423884 node delete m03 --alsologtostderr -v 5": exit status 83
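
The exit status 83 comes from the host-state check visible in the stderr above: minikube runs `docker container inspect ha-423884-m03 --format={{.State.Status}}` (cli_runner.go) and bails out once the container reports a stopped state. The sketch below wraps that same docker command; the helper name and the node list are illustrative, not minikube code.

	// containerState is a sketch of the state check shown in the log above.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		for _, node := range []string{"ha-423884", "ha-423884-m02", "ha-423884-m03"} {
			state, err := containerState(node)
			if err != nil {
				fmt.Println(node, "error:", err)
				continue
			}
			// docker reports a stopped container as "exited", which maps to the
			// "Stopped" host state that made `node delete` return exit status 83.
			fmt.Println(node, "=>", state)
		}
	}

With the cluster in the state shown below, the two running control-plane containers should print "running" and ha-423884-m03 "exited".
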
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5: exit status 7 (16.935658153s)

                                                
                                                
-- stdout --
	ha-423884
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-423884-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-423884-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423884-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:03:04.405030   54517 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:03:04.405138   54517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:04.405149   54517 out.go:374] Setting ErrFile to fd 2...
	I1109 14:03:04.405154   54517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:04.405433   54517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:03:04.405632   54517 out.go:368] Setting JSON to false
	I1109 14:03:04.405668   54517 mustload.go:66] Loading cluster: ha-423884
	I1109 14:03:04.405731   54517 notify.go:221] Checking for updates...
	I1109 14:03:04.406687   54517 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:04.406711   54517 status.go:174] checking status of ha-423884 ...
	I1109 14:03:04.407329   54517 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:04.427317   54517 status.go:371] ha-423884 host status = "Running" (err=<nil>)
	I1109 14:03:04.427340   54517 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:03:04.427642   54517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:04.456295   54517 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:03:04.456586   54517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:04.456646   54517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:04.475291   54517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:04.581137   54517 ssh_runner.go:195] Run: systemctl --version
	I1109 14:03:04.587607   54517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:03:04.600340   54517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:04.665331   54517 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:03:04.656243697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:04.665943   54517 kubeconfig.go:125] found "ha-423884" server: "https://192.168.49.254:8443"
	I1109 14:03:04.665976   54517 api_server.go:166] Checking apiserver status ...
	I1109 14:03:04.666019   54517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 14:03:04.676062   54517 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:04.676089   54517 status.go:463] ha-423884 apiserver status = Running (err=<nil>)
	I1109 14:03:04.676099   54517 status.go:176] ha-423884 status: &{Name:ha-423884 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:03:04.676123   54517 status.go:174] checking status of ha-423884-m02 ...
	I1109 14:03:04.676407   54517 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:04.692933   54517 status.go:371] ha-423884-m02 host status = "Running" (err=<nil>)
	I1109 14:03:04.692957   54517 host.go:66] Checking if "ha-423884-m02" exists ...
	I1109 14:03:04.693272   54517 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:04.718905   54517 host.go:66] Checking if "ha-423884-m02" exists ...
	I1109 14:03:04.719230   54517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:04.719272   54517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:04.738556   54517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:04.841943   54517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:03:04.858401   54517 kubeconfig.go:125] found "ha-423884" server: "https://192.168.49.254:8443"
	I1109 14:03:04.858433   54517 api_server.go:166] Checking apiserver status ...
	I1109 14:03:04.858482   54517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:03:04.869639   54517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1255/cgroup
	I1109 14:03:04.879436   54517 api_server.go:182] apiserver freezer: "11:freezer:/docker/fd0b3129a6238d6fdd418637b5f02562e6a77e8a5b595fc932e1afb77e27e771/crio/crio-1323a44babf4efb19eabfc7a5bd049a0a1f7adcdcb93764dda569f7bd84939b8"
	I1109 14:03:04.879592   54517 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fd0b3129a6238d6fdd418637b5f02562e6a77e8a5b595fc932e1afb77e27e771/crio/crio-1323a44babf4efb19eabfc7a5bd049a0a1f7adcdcb93764dda569f7bd84939b8/freezer.state
	I1109 14:03:04.887541   54517 api_server.go:204] freezer state: "THAWED"
	I1109 14:03:04.887568   54517 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 14:03:05.888154   54517 api_server.go:269] stopped: https://192.168.49.254:8443/healthz: Get "https://192.168.49.254:8443/healthz": dial tcp 192.168.49.254:8443: connect: no route to host
	I1109 14:03:05.888225   54517 retry.go:31] will retry after 278.62279ms: state is "Stopped"
	I1109 14:03:06.167674   54517 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 14:03:08.960213   54517 api_server.go:269] stopped: https://192.168.49.254:8443/healthz: Get "https://192.168.49.254:8443/healthz": dial tcp 192.168.49.254:8443: connect: no route to host
	I1109 14:03:08.960260   54517 retry.go:31] will retry after 360.54664ms: state is "Stopped"
	I1109 14:03:09.321814   54517 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 14:03:12.032189   54517 api_server.go:269] stopped: https://192.168.49.254:8443/healthz: Get "https://192.168.49.254:8443/healthz": dial tcp 192.168.49.254:8443: connect: no route to host
	I1109 14:03:12.032234   54517 retry.go:31] will retry after 401.708073ms: state is "Stopped"
	I1109 14:03:12.434885   54517 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 14:03:15.104218   54517 api_server.go:269] stopped: https://192.168.49.254:8443/healthz: Get "https://192.168.49.254:8443/healthz": dial tcp 192.168.49.254:8443: connect: no route to host
	I1109 14:03:15.104284   54517 retry.go:31] will retry after 576.292871ms: state is "Stopped"
	I1109 14:03:15.680969   54517 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 14:03:18.176151   54517 api_server.go:269] stopped: https://192.168.49.254:8443/healthz: Get "https://192.168.49.254:8443/healthz": dial tcp 192.168.49.254:8443: connect: no route to host
	I1109 14:03:18.176195   54517 retry.go:31] will retry after 706.278894ms: state is "Stopped"
	I1109 14:03:18.882634   54517 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 14:03:21.248141   54517 api_server.go:269] stopped: https://192.168.49.254:8443/healthz: Get "https://192.168.49.254:8443/healthz": dial tcp 192.168.49.254:8443: connect: no route to host
	I1109 14:03:21.248192   54517 status.go:463] ha-423884-m02 apiserver status = Running (err=<nil>)
	I1109 14:03:21.248200   54517 status.go:176] ha-423884-m02 status: &{Name:ha-423884-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:03:21.248221   54517 status.go:174] checking status of ha-423884-m03 ...
	I1109 14:03:21.248547   54517 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:03:21.266100   54517 status.go:371] ha-423884-m03 host status = "Stopped" (err=<nil>)
	I1109 14:03:21.266123   54517 status.go:384] host is not running, skipping remaining checks
	I1109 14:03:21.266142   54517 status.go:176] ha-423884-m03 status: &{Name:ha-423884-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:03:21.266162   54517 status.go:174] checking status of ha-423884-m04 ...
	I1109 14:03:21.266469   54517 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:03:21.283609   54517 status.go:371] ha-423884-m04 host status = "Stopped" (err=<nil>)
	I1109 14:03:21.283633   54517 status.go:384] host is not running, skipping remaining checks
	I1109 14:03:21.283640   54517 status.go:176] ha-423884-m04 status: &{Name:ha-423884-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5" : exit status 7
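
Most of the 16.9s spent in `status` is the healthz retry loop in the stderr above: each probe of https://192.168.49.254:8443/healthz fails with "no route to host" and retry.go schedules another attempt after a slightly longer delay. The sketch below mirrors that shape with a plain TCP dial; the attempt count and the growth factor are assumptions, since the log only shows the successive delays.

	// retryProbe is a sketch of the retry pattern visible in the status output
	// above ("will retry after 278.62279ms", "will retry after 360.54664ms", ...).
	package main
	
	import (
		"errors"
		"fmt"
		"net"
		"time"
	)
	
	func retryProbe(addr string, attempts int) error {
		delay := 250 * time.Millisecond
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil // apiserver port reachable
			}
			fmt.Printf("attempt %d failed (%v), will retry after %v\n", i+1, err, delay)
			time.Sleep(delay)
			delay = delay * 14 / 10 // grow ~1.4x per attempt (assumed factor)
		}
		return errors.New("still unreachable after all attempts")
	}
	
	func main() {
		// 192.168.49.254:8443 is the HA VIP the status command probes in the log above.
		if err := retryProbe("192.168.49.254:8443", 5); err != nil {
			fmt.Println(err)
		}
	}
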
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-423884
helpers_test.go:243: (dbg) docker inspect ha-423884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	        "Created": "2025-11-09T13:50:17.166169915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:54:55.389490243Z",
	            "FinishedAt": "2025-11-09T13:54:54.671589817Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hosts",
	        "LogPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8-json.log",
	        "Name": "/ha-423884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-423884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-423884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	                "LowerDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-423884",
	                "Source": "/var/lib/docker/volumes/ha-423884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-423884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-423884",
	                "name.minikube.sigs.k8s.io": "ha-423884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89fbaf0c08047c2a06be0a8a75835803aa19533c48ff4c5735fc268ee9d93691",
	            "SandboxKey": "/var/run/docker/netns/89fbaf0c0804",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-423884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:00:10:26:7b:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b901b8dcb82129bdc4c62d2bf9cac8a365e41b87cf75b0978b149071ce152f44",
	                    "EndpointID": "20baa733fdf8670705aeddf1cfd5b1a5d39152767930d2a81eaedc478e6f1104",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-423884",
	                        "8c902201acb6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
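
The inspect output above is also where the port numbers used throughout this log come from: 8443/tcp inside the container is published on 127.0.0.1:32811, and 22/tcp on 127.0.0.1:32808, the SSH port the status checks dial. A small sketch (an assumed helper, not minikube code) that pulls the apiserver binding out of the same JSON:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// inspect models only the fields needed from `docker inspect` output.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	
	func main() {
		out, err := exec.Command("docker", "inspect", "ha-423884").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
			fmt.Println("unexpected inspect output:", err)
			return
		}
		for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
			// With the bindings shown above this prints 127.0.0.1:32811.
			fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
		}
	}
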
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884: exit status 2 (318.617553ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
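
The `--format={{.Host}}` flag is a Go text/template rendered against the per-node status struct printed earlier (status.go:176), which is why this check prints "Running" while the earlier `--format={{.APIServer}}` check printed "Stopped". A trimmed sketch of that rendering; the struct here carries only the fields visible in the log and is not minikube's actual type.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Status mirrors the fields shown in the status stderr above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}
	
	func main() {
		s := Status{Name: "ha-423884", Host: "Running", Kubelet: "Running",
			APIServer: "Stopped", Kubeconfig: "Configured"}
		// {{.Host}} selects one field of the struct; {{.APIServer}} would print "Stopped".
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, s)
	}
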
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884-m04:/home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp testdata/cp-test.txt ha-423884-m04:/home/docker/cp-test.txt                                                            │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m04.txt │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m04_ha-423884.txt                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884.txt                                                │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node start m02 --alsologtostderr -v 5                                                                                     │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:54 UTC │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │ 09 Nov 25 13:54 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5                                                                                  │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:02 UTC │                     │
	│ node    │ ha-423884 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:54:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:54:55.113963   50941 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:54:55.114176   50941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:55.114201   50941 out.go:374] Setting ErrFile to fd 2...
	I1109 13:54:55.114221   50941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:55.114531   50941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:54:55.114968   50941 out.go:368] Setting JSON to false
	I1109 13:54:55.115825   50941 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2245,"bootTime":1762694250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:54:55.115981   50941 start.go:143] virtualization:  
	I1109 13:54:55.119256   50941 out.go:179] * [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:54:55.122910   50941 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:54:55.122982   50941 notify.go:221] Checking for updates...
	I1109 13:54:55.128665   50941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:54:55.131661   50941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:54:55.134714   50941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:54:55.137648   50941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:54:55.140756   50941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:54:55.144331   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:54:55.144477   50941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:54:55.179836   50941 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:54:55.179983   50941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:54:55.239723   50941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 13:54:55.22949764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:54:55.239832   50941 docker.go:319] overlay module found
	I1109 13:54:55.244865   50941 out.go:179] * Using the docker driver based on existing profile
	I1109 13:54:55.247777   50941 start.go:309] selected driver: docker
	I1109 13:54:55.247800   50941 start.go:930] validating driver "docker" against &{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:54:55.248067   50941 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:54:55.248171   50941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:54:55.301652   50941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 13:54:55.292554772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
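The "docker system info --format "{{json .}}"" call above returns the daemon state as one JSON object; minikube only needs a handful of its fields (CPU count, total memory, storage and cgroup drivers) to validate the driver against the existing profile. A minimal sketch of that pattern in Go, assuming the docker CLI is on PATH:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Only the fields we care about; everything else in the JSON is ignored.
    type dockerInfo struct {
        NCPU         int    `json:"NCPU"`
        MemTotal     int64  `json:"MemTotal"`
        CgroupDriver string `json:"CgroupDriver"`
        Driver       string `json:"Driver"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err)
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("cpus=%d mem=%d cgroup=%s storage=%s\n",
            info.NCPU, info.MemTotal, info.CgroupDriver, info.Driver)
    }

Fields not declared in the struct are silently dropped by encoding/json, which is why only the values the tool actually checks matter here.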
	I1109 13:54:55.302058   50941 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:54:55.302090   50941 cni.go:84] Creating CNI manager for ""
	I1109 13:54:55.302144   50941 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 13:54:55.302223   50941 start.go:353] cluster config:
	{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:54:55.305501   50941 out.go:179] * Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	I1109 13:54:55.308419   50941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:54:55.311313   50941 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:54:55.314132   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:54:55.314177   50941 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 13:54:55.314201   50941 cache.go:65] Caching tarball of preloaded images
	I1109 13:54:55.314200   50941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:54:55.314293   50941 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:54:55.314309   50941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:54:55.314456   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:54:55.334236   50941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:54:55.334260   50941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:54:55.334278   50941 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:54:55.334307   50941 start.go:360] acquireMachinesLock for ha-423884: {Name:mkda5c7a1ce8a51da0d8a40a6bd47565509d6909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:54:55.334364   50941 start.go:364] duration metric: took 38.367µs to acquireMachinesLock for "ha-423884"
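acquireMachinesLock serializes machine create/start operations across concurrent minikube processes; the 38µs shown here is the uncontended case, and the Delay:500ms Timeout:10m0s values describe how long a second invocation would poll. The sketch below only illustrates the general idea with an exclusive lock file; it is not minikube's actual implementation, and acquireLock is a hypothetical helper:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireLock polls for an exclusive lock file until timeout.
    // Hypothetical helper, for illustration only.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        // ... start/stop machines while holding the lock ...
    }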
	I1109 13:54:55.334396   50941 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:54:55.334402   50941 fix.go:54] fixHost starting: 
	I1109 13:54:55.334657   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:54:55.351523   50941 fix.go:112] recreateIfNeeded on ha-423884: state=Stopped err=<nil>
	W1109 13:54:55.351563   50941 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:54:55.355014   50941 out.go:252] * Restarting existing docker container for "ha-423884" ...
	I1109 13:54:55.355096   50941 cli_runner.go:164] Run: docker start ha-423884
	I1109 13:54:55.620677   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:54:55.643357   50941 kic.go:430] container "ha-423884" state is running.
	I1109 13:54:55.643727   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:54:55.666053   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:54:55.666297   50941 machine.go:94] provisionDockerMachine start ...
	I1109 13:54:55.666487   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:55.687681   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:55.688070   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:55.688081   50941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:54:55.688918   50941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
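The first dial fails with "ssh: handshake failed: EOF" because sshd inside the just-restarted container is not accepting connections yet; libmachine keeps retrying until the hostname probe succeeds about three seconds later, as the next line shows. A minimal sketch of that retry loop using golang.org/x/crypto/ssh, with the port and key path taken from this log:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
            Timeout:         5 * time.Second,
        }
        var client *ssh.Client
        for i := 0; i < 30; i++ { // retry while sshd comes up
            client, err = ssh.Dial("tcp", "127.0.0.1:32808", cfg)
            if err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }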
	I1109 13:54:58.840047   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 13:54:58.840071   50941 ubuntu.go:182] provisioning hostname "ha-423884"
	I1109 13:54:58.840136   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:58.857828   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:58.858140   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:58.858156   50941 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884 && echo "ha-423884" | sudo tee /etc/hostname
	I1109 13:54:59.019946   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 13:54:59.020040   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.037942   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:59.038251   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:59.038273   50941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:54:59.188269   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:54:59.188306   50941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:54:59.188331   50941 ubuntu.go:190] setting up certificates
	I1109 13:54:59.188340   50941 provision.go:84] configureAuth start
	I1109 13:54:59.189373   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:54:59.208053   50941 provision.go:143] copyHostCerts
	I1109 13:54:59.208097   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:54:59.208129   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:54:59.208146   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:54:59.208224   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:54:59.208317   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:54:59.208339   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:54:59.208343   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:54:59.208378   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:54:59.208440   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:54:59.208460   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:54:59.208464   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:54:59.208489   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:54:59.208551   50941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884 san=[127.0.0.1 192.168.49.2 ha-423884 localhost minikube]
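configureAuth regenerates the machine's server certificate with the SAN list shown above (loopback, the container IP, the node hostname, localhost, minikube), signing it against the CA under .minikube/certs. The sketch below shows how such a SAN certificate is issued with crypto/x509; it is self-signed for brevity, whereas minikube signs with its own CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-423884"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-423884", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }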
	I1109 13:54:59.461403   50941 provision.go:177] copyRemoteCerts
	I1109 13:54:59.461473   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:54:59.461549   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.479030   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:54:59.583635   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 13:54:59.583697   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:54:59.602289   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 13:54:59.602361   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1109 13:54:59.620339   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 13:54:59.620401   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:54:59.637908   50941 provision.go:87] duration metric: took 449.546564ms to configureAuth
	I1109 13:54:59.637931   50941 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:54:59.638163   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:54:59.638260   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.655128   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:59.655439   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:59.655453   50941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:55:00.057852   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:55:00.057945   50941 machine.go:97] duration metric: took 4.391628222s to provisionDockerMachine
	I1109 13:55:00.057975   50941 start.go:293] postStartSetup for "ha-423884" (driver="docker")
	I1109 13:55:00.058017   50941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:55:00.058141   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:55:00.058222   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.132751   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.319042   50941 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:55:00.329077   50941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:55:00.329106   50941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:55:00.329119   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:55:00.329189   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:55:00.329276   50941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:55:00.329283   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 13:55:00.330448   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 13:55:00.371002   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:00.407176   50941 start.go:296] duration metric: took 349.154111ms for postStartSetup
	I1109 13:55:00.407280   50941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:55:00.407327   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.431818   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.545778   50941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:55:00.551509   50941 fix.go:56] duration metric: took 5.21709566s for fixHost
	I1109 13:55:00.551542   50941 start.go:83] releasing machines lock for "ha-423884", held for 5.217154802s
	I1109 13:55:00.551634   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:55:00.570733   50941 ssh_runner.go:195] Run: cat /version.json
	I1109 13:55:00.570830   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.571142   50941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:55:00.571242   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.592161   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.595555   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.796007   50941 ssh_runner.go:195] Run: systemctl --version
	I1109 13:55:00.805538   50941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:55:00.848119   50941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:55:00.852825   50941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:55:00.852894   50941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:55:00.861218   50941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
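Because the cluster uses kindnet for pod networking, minikube renames any default bridge/podman CNI configs with a .mk_disabled suffix so CRI-O will not load them; in this run there were none to disable. The find/mv invocation above amounts to roughly the following (illustrative sketch only):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                fmt.Printf("disabling %s\n", m)
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    panic(err)
                }
            }
        }
    }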
	I1109 13:55:00.861244   50941 start.go:496] detecting cgroup driver to use...
	I1109 13:55:00.861295   50941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:55:00.861369   50941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:55:00.877235   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:55:00.891078   50941 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:55:00.891189   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:55:00.907526   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:55:00.921033   50941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:55:01.038695   50941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:55:01.157282   50941 docker.go:234] disabling docker service ...
	I1109 13:55:01.157400   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:55:01.175939   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:55:01.191589   50941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:55:01.322566   50941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:55:01.442242   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:55:01.455592   50941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:55:01.470955   50941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:55:01.471022   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.480518   50941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:55:01.480598   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.490192   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.499971   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.508704   50941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:55:01.517693   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.526722   50941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.535091   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.544402   50941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:55:01.552454   50941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:55:01.560165   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:01.679582   50941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:55:01.822017   50941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:55:01.822142   50941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:55:01.826235   50941 start.go:564] Will wait 60s for crictl version
	I1109 13:55:01.826377   50941 ssh_runner.go:195] Run: which crictl
	I1109 13:55:01.830288   50941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:55:01.857542   50941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:55:01.857636   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:55:01.890996   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:55:01.922135   50941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:55:01.925065   50941 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:55:01.943662   50941 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:55:01.947786   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:55:01.958276   50941 kubeadm.go:884] updating cluster {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:55:01.958452   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:55:01.958516   50941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:55:01.997808   50941 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:55:01.997834   50941 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:55:01.997895   50941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:55:02.024927   50941 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:55:02.024953   50941 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:55:02.024962   50941 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 13:55:02.025128   50941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:55:02.025216   50941 ssh_runner.go:195] Run: crio config
	I1109 13:55:02.096570   50941 cni.go:84] Creating CNI manager for ""
	I1109 13:55:02.096595   50941 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 13:55:02.096612   50941 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:55:02.096664   50941 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423884 NodeName:ha-423884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:55:02.096862   50941 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
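The generated kubeadm.yaml is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A quick way to sanity-check such a multi-document file before handing it to kubeadm, assuming gopkg.in/yaml.v3 is available:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }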
	I1109 13:55:02.096890   50941 kube-vip.go:115] generating kube-vip config ...
	I1109 13:55:02.096949   50941 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 13:55:02.108971   50941 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
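kube-vip can balance control-plane traffic through IPVS or simply hold the virtual IP via ARP; because lsmod finds no ip_vs modules in this kernel, minikube falls back to the ARP-only mode (vip_arp=true in the manifest below). The same check can be done without shelling out by scanning /proc/modules, which is the data lsmod prints (Linux-only sketch):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasModule reports whether a kernel module is currently loaded,
    // by scanning /proc/modules (the same data lsmod prints).
    func hasModule(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            if strings.HasPrefix(s.Text(), name+" ") {
                return true, nil
            }
        }
        return false, s.Err()
    }

    func main() {
        ok, err := hasModule("ip_vs")
        if err != nil {
            panic(err)
        }
        fmt.Println("ip_vs loaded:", ok)
    }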
	I1109 13:55:02.109069   50941 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 13:55:02.109141   50941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:55:02.117384   50941 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:55:02.117456   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1109 13:55:02.125412   50941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1109 13:55:02.139511   50941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:55:02.153010   50941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1109 13:55:02.166712   50941 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 13:55:02.180196   50941 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 13:55:02.183971   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:55:02.194268   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:02.311353   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:55:02.326388   50941 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.2
	I1109 13:55:02.326464   50941 certs.go:195] generating shared ca certs ...
	I1109 13:55:02.326494   50941 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.326661   50941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:55:02.326749   50941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:55:02.326774   50941 certs.go:257] generating profile certs ...
	I1109 13:55:02.326889   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 13:55:02.326942   50941 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612
	I1109 13:55:02.326978   50941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1109 13:55:02.791794   50941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 ...
	I1109 13:55:02.791832   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612: {Name:mkffe35c2a4a9e9ef2460782868fdfad2ff0b271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.792045   50941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612 ...
	I1109 13:55:02.792064   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612: {Name:mk387e11ec0c12eb2f7dfe43ad45967daf55df66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.792144   50941 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt
	I1109 13:55:02.792295   50941 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key
	I1109 13:55:02.792427   50941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 13:55:02.792446   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 13:55:02.792461   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 13:55:02.792477   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 13:55:02.792493   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 13:55:02.792513   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 13:55:02.792538   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 13:55:02.792554   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 13:55:02.792565   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 13:55:02.792626   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:55:02.792662   50941 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:55:02.792674   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:55:02.792701   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:55:02.792731   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:55:02.792756   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:55:02.792801   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:02.792832   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:02.792847   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 13:55:02.792857   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 13:55:02.793472   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:55:02.812682   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:55:02.831556   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:55:02.850358   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:55:02.868603   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 13:55:02.886617   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 13:55:02.904941   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:55:02.923399   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:55:02.941967   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:55:02.960665   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:55:02.983802   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:55:03.009751   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:55:03.031089   50941 ssh_runner.go:195] Run: openssl version
	I1109 13:55:03.048901   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:55:03.062145   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.066948   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.067030   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.126889   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:55:03.139601   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:55:03.154789   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.160976   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.161044   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.250161   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:55:03.265557   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:55:03.284581   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.293895   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.293986   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.352776   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
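Each "openssl x509 -hash -noout" / "ln -fs ... /etc/ssl/certs/<hash>.0" pair above installs a CA into the node's trust directory under OpenSSL's subject-hash naming convention (the same layout c_rehash produces), which is how b5213941.0 ends up pointing at minikubeCA.pem. A sketch of that step, shelling out to openssl just as the log does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA links certPath into /etc/ssl/certs under its OpenSSL
    // subject-hash name (<hash>.0), mirroring what c_rehash would do.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }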
	I1109 13:55:03.366601   50941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:55:03.371480   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:55:03.428568   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:55:03.484572   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:55:03.542471   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:55:03.629320   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:55:03.682786   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
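The -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least another 24 hours: openssl exits 0 if the certificate will not expire within the window and 1 if it will, and that exit code decides whether the cert gets regenerated before the restart continues. A small wrapper around that convention:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // validFor reports whether the certificate at path is still valid
    // for at least the next `seconds` seconds, using openssl's exit code.
    func validFor(path string, seconds int) (bool, error) {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
            "-checkend", fmt.Sprint(seconds))
        err := cmd.Run()
        if err == nil {
            return true, nil // will not expire within the window
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
            return false, nil // expires within the window
        }
        return false, err
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
        if err != nil {
            panic(err)
        }
        fmt.Println("valid for next 24h:", ok)
    }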
	I1109 13:55:03.732655   50941 kubeadm.go:401] StartCluster: {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:55:03.732832   50941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:55:03.732937   50941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:55:03.774883   50941 cri.go:89] found id: "90f6d4700e66c1004154b3bffd5b655a9e7a54dab0ca93ca633a48ec6805be8c"
	I1109 13:55:03.774945   50941 cri.go:89] found id: "ee4108629384f7d2a0c69033ae60bc1c7015caec18238848cb6dace4abb60ac1"
	I1109 13:55:03.774963   50941 cri.go:89] found id: "dc4b89b5cdd42a6e98698322cd4a212e4b2439c3edbe3305cc3f85573f85fb2b"
	I1109 13:55:03.774978   50941 cri.go:89] found id: "ad03fe50fbbd1dace582db018b89f80349534b6604f17260fe8e6175c0110640"
	I1109 13:55:03.774996   50941 cri.go:89] found id: "435fea996772c07f8ab06a7210ea047100aeb59de8bfe2b882e29743c63515bf"
	I1109 13:55:03.775025   50941 cri.go:89] found id: ""
	I1109 13:55:03.775095   50941 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 13:55:03.797048   50941 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:55:03Z" level=error msg="open /run/runc: no such file or directory"
	I1109 13:55:03.797184   50941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:55:03.808348   50941 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 13:55:03.808412   50941 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 13:55:03.808506   50941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 13:55:03.822278   50941 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:55:03.822778   50941 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-423884" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:55:03.822942   50941 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "ha-423884" cluster setting kubeconfig missing "ha-423884" context setting]
	I1109 13:55:03.823266   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.823946   50941 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 13:55:03.824966   50941 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 13:55:03.825021   50941 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 13:55:03.825042   50941 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 13:55:03.825069   50941 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 13:55:03.825105   50941 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 13:55:03.824993   50941 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1109 13:55:03.825458   50941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 13:55:03.835686   50941 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1109 13:55:03.835761   50941 kubeadm.go:602] duration metric: took 27.320729ms to restartPrimaryControlPlane
	I1109 13:55:03.835784   50941 kubeadm.go:403] duration metric: took 103.139447ms to StartCluster
	I1109 13:55:03.835816   50941 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.835943   50941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:55:03.836573   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.836835   50941 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:55:03.836883   50941 start.go:242] waiting for startup goroutines ...
	I1109 13:55:03.836904   50941 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 13:55:03.837391   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:03.842877   50941 out.go:179] * Enabled addons: 
	I1109 13:55:03.845819   50941 addons.go:515] duration metric: took 8.899809ms for enable addons: enabled=[]
	I1109 13:55:03.845887   50941 start.go:247] waiting for cluster config update ...
	I1109 13:55:03.845910   50941 start.go:256] writing updated cluster config ...
	I1109 13:55:03.849092   50941 out.go:203] 
	I1109 13:55:03.852433   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:03.852560   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:03.856045   50941 out.go:179] * Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	I1109 13:55:03.858860   50941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:55:03.861834   50941 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:55:03.864733   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:55:03.864755   50941 cache.go:65] Caching tarball of preloaded images
	I1109 13:55:03.864808   50941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:55:03.864863   50941 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:55:03.864879   50941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:55:03.865000   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:03.890212   50941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:55:03.890235   50941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:55:03.890249   50941 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:55:03.890271   50941 start.go:360] acquireMachinesLock for ha-423884-m02: {Name:mkc465d60ac134a0502b48f535d5c2db44f7f07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:55:03.890338   50941 start.go:364] duration metric: took 47.253µs to acquireMachinesLock for "ha-423884-m02"
	I1109 13:55:03.890362   50941 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:55:03.890369   50941 fix.go:54] fixHost starting: m02
	I1109 13:55:03.890623   50941 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:55:03.914904   50941 fix.go:112] recreateIfNeeded on ha-423884-m02: state=Stopped err=<nil>
	W1109 13:55:03.914934   50941 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:55:03.917975   50941 out.go:252] * Restarting existing docker container for "ha-423884-m02" ...
	I1109 13:55:03.918057   50941 cli_runner.go:164] Run: docker start ha-423884-m02
	I1109 13:55:04.309913   50941 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:55:04.344086   50941 kic.go:430] container "ha-423884-m02" state is running.
	I1109 13:55:04.344458   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:04.369599   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:04.369844   50941 machine.go:94] provisionDockerMachine start ...
	I1109 13:55:04.369909   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:04.400285   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:04.400586   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:04.400595   50941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:55:04.401311   50941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 13:55:07.579475   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 13:55:07.579506   50941 ubuntu.go:182] provisioning hostname "ha-423884-m02"
	I1109 13:55:07.579638   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:07.602366   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:07.602673   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:07.602690   50941 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m02 && echo "ha-423884-m02" | sudo tee /etc/hostname
	I1109 13:55:07.809995   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 13:55:07.810122   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:07.846000   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:07.846319   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:07.846341   50941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:55:08.029626   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:55:08.029653   50941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:55:08.029670   50941 ubuntu.go:190] setting up certificates
	I1109 13:55:08.029726   50941 provision.go:84] configureAuth start
	I1109 13:55:08.029805   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:08.053364   50941 provision.go:143] copyHostCerts
	I1109 13:55:08.053410   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:55:08.053445   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:55:08.053457   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:55:08.053539   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:55:08.053624   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:55:08.053647   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:55:08.053656   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:55:08.053687   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:55:08.053733   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:55:08.053755   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:55:08.053762   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:55:08.053788   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:55:08.053839   50941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m02 san=[127.0.0.1 192.168.49.3 ha-423884-m02 localhost minikube]
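
Each node gets its own server certificate, signed by the shared minikube CA, whose SAN list carries exactly the names in the log line above (127.0.0.1, the node IP, the node hostname, localhost, minikube). The sketch below shows that general shape with crypto/x509; `signServerCert` and the throwaway CA in main are illustrative assumptions, not minikube's actual helpers, which reuse ca.pem and ca-key.pem from the .minikube/certs directory:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate signed by the given CA with the
// SANs seen in the log: 127.0.0.1, the node IP, the node hostname, localhost,
// and minikube.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, nodeName string, nodeIP net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins." + nodeName}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{nodeName, "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), nodeIP},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway CA so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-ca"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, err := signServerCert(caCert, caKey, "ha-423884-m02", net.ParseIP("192.168.49.3"))
	if err != nil {
		panic(err)
	}
	fmt.Print(string(certPEM))
}
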
	I1109 13:55:08.908426   50941 provision.go:177] copyRemoteCerts
	I1109 13:55:08.908547   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:55:08.908608   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:08.925860   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:09.037241   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 13:55:09.037302   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:55:09.069800   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 13:55:09.069861   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:55:09.100884   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 13:55:09.100993   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:55:09.135925   50941 provision.go:87] duration metric: took 1.10618017s to configureAuth
	I1109 13:55:09.136002   50941 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:55:09.136280   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:09.136432   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:09.164021   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:09.164323   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:09.164337   50941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:55:10.296777   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:55:10.296814   50941 machine.go:97] duration metric: took 5.926952254s to provisionDockerMachine
	I1109 13:55:10.296825   50941 start.go:293] postStartSetup for "ha-423884-m02" (driver="docker")
	I1109 13:55:10.296872   50941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:55:10.296972   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:55:10.297065   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.332056   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.449335   50941 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:55:10.453805   50941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:55:10.453831   50941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:55:10.453843   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:55:10.453902   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:55:10.453979   50941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:55:10.453986   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 13:55:10.454091   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 13:55:10.463699   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:10.484008   50941 start.go:296] duration metric: took 187.133589ms for postStartSetup
	I1109 13:55:10.484157   50941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:55:10.484228   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.524852   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.647401   50941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:55:10.654990   50941 fix.go:56] duration metric: took 6.764614102s for fixHost
	I1109 13:55:10.655012   50941 start.go:83] releasing machines lock for "ha-423884-m02", held for 6.764660929s
	I1109 13:55:10.655097   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:10.684905   50941 out.go:179] * Found network options:
	I1109 13:55:10.687829   50941 out.go:179]   - NO_PROXY=192.168.49.2
	W1109 13:55:10.690818   50941 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 13:55:10.690871   50941 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 13:55:10.690948   50941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:55:10.690961   50941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:55:10.690989   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.691019   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.712241   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.725530   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:11.009545   50941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:55:11.084432   50941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:55:11.084558   50941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:55:11.121004   50941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:55:11.121072   50941 start.go:496] detecting cgroup driver to use...
	I1109 13:55:11.121123   50941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:55:11.121189   50941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:55:11.194725   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:55:11.266109   50941 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:55:11.266214   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:55:11.320610   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:55:11.355184   50941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:55:11.758458   50941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:55:12.038806   50941 docker.go:234] disabling docker service ...
	I1109 13:55:12.038952   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:55:12.067528   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:55:12.086834   50941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:55:12.313835   50941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:55:12.529003   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:55:12.547999   50941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:55:12.574350   50941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:55:12.574468   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.593062   50941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:55:12.593178   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.611675   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.621325   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.634011   50941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:55:12.644140   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.656327   50941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.666866   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.678918   50941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:55:12.688104   50941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:55:12.699862   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:12.929281   50941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:56:43.189989   50941 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.260670612s)
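
The CRI-O reconfiguration above is done with line-level sed edits of /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned, cgroup_manager is forced to cgroupfs, conmon_cgroup is set to "pod", and a default_sysctls entry opens unprivileged ports; only then is crio restarted, which in this run took 1m30s. A rough Go equivalent of the first two rewrites is sketched below; `applyCrioOverrides` and the sample config are assumptions for illustration, not minikube's crio.go code:

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mimics the sed edits in the log: point pause_image at the
// desired pause image and force the cgroupfs cgroup manager.
func applyCrioOverrides(conf, pauseImage string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, fmt.Sprintf("pause_image = %q", pauseImage))
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"
	fmt.Print(applyCrioOverrides(in, "registry.k8s.io/pause:3.10.1"))
}
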
	I1109 13:56:43.190013   50941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:56:43.190063   50941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:56:43.194863   50941 start.go:564] Will wait 60s for crictl version
	I1109 13:56:43.194926   50941 ssh_runner.go:195] Run: which crictl
	I1109 13:56:43.198897   50941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:56:43.224592   50941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:56:43.224673   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:56:43.252803   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:56:43.288977   50941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:56:43.292132   50941 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 13:56:43.295175   50941 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:56:43.311775   50941 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:56:43.316096   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
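
The bash one-liner above pins host.minikube.internal by filtering any existing mapping out of /etc/hosts and appending the gateway entry, writing through a temp file and `sudo cp`. The same filter-and-append step is sketched in Go below; the path, helper name, and use of os.Rename are illustrative assumptions (the real run copies over SSH as an unprivileged user):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry drops any line that already maps the given name and appends a
// fresh "ip<TAB>name" mapping, mirroring the grep -v / echo pipeline above.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// Work on a scratch copy so the example is safe to run.
	_ = os.WriteFile("/tmp/hosts-example", []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := pinHostsEntry("/tmp/hosts-example", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
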
	I1109 13:56:43.327026   50941 mustload.go:66] Loading cluster: ha-423884
	I1109 13:56:43.327285   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:56:43.327549   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:56:43.344797   50941 host.go:66] Checking if "ha-423884" exists ...
	I1109 13:56:43.345106   50941 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.3
	I1109 13:56:43.345119   50941 certs.go:195] generating shared ca certs ...
	I1109 13:56:43.345155   50941 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:56:43.345275   50941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:56:43.345325   50941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:56:43.345337   50941 certs.go:257] generating profile certs ...
	I1109 13:56:43.345411   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 13:56:43.345491   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.75d82079
	I1109 13:56:43.345540   50941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 13:56:43.345557   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 13:56:43.345575   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 13:56:43.345594   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 13:56:43.345615   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 13:56:43.345628   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 13:56:43.345642   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 13:56:43.345658   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 13:56:43.345671   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 13:56:43.345729   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:56:43.345760   50941 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:56:43.345772   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:56:43.345800   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:56:43.345827   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:56:43.345850   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:56:43.345896   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:56:43.345926   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.345942   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.345953   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.346011   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:56:43.364089   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:56:43.460186   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 13:56:43.463803   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 13:56:43.471985   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 13:56:43.475672   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 13:56:43.483925   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 13:56:43.487471   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 13:56:43.495787   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 13:56:43.499361   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 13:56:43.507536   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 13:56:43.511561   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 13:56:43.520262   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 13:56:43.524097   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 13:56:43.532380   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:56:43.553569   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:56:43.574274   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:56:43.593982   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:56:43.611803   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 13:56:43.629036   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 13:56:43.646449   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:56:43.665505   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:56:43.685863   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:56:43.704695   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:56:43.725055   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:56:43.743980   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 13:56:43.757782   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 13:56:43.770797   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 13:56:43.783823   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 13:56:43.798200   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 13:56:43.811164   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 13:56:43.824190   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 13:56:43.838949   50941 ssh_runner.go:195] Run: openssl version
	I1109 13:56:43.845204   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:56:43.853394   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.857520   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.857581   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.898978   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:56:43.907056   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:56:43.915514   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.919395   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.919509   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.961298   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:56:43.969278   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:56:43.979745   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.983461   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.983552   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:56:44.024743   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:56:44.034346   50941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:56:44.038346   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:56:44.083522   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:56:44.124383   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:56:44.165272   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:56:44.207715   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:56:44.249227   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
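
Each of the `openssl x509 ... -checkend 86400` runs above asks one question: will this certificate still be valid 24 hours from now? The same check can be done by parsing the PEM directly, as in this small sketch (the path is taken from the log; `expiresWithin` is an illustrative helper, not minikube's API):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
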
	I1109 13:56:44.295420   50941 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1109 13:56:44.295534   50941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
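
The kubelet unit dumped above carries the per-node flags (--hostname-override and --node-ip differ for every node) and is written out as a systemd drop-in. A minimal sketch of rendering such a drop-in with text/template follows; the template text paraphrases the unit above and is not copied from minikube's sources:

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is an illustrative template for the per-node kubelet flags
// seen in the log (hostname override and node IP vary per node).
const kubeletDropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "ha-423884-m02",
		"NodeIP":            "192.168.49.3",
	})
}
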
	I1109 13:56:44.295575   50941 kube-vip.go:115] generating kube-vip config ...
	I1109 13:56:44.295626   50941 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 13:56:44.307501   50941 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:56:44.307559   50941 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
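
Before writing the kube-vip static pod manifest above, kube-vip.go checks whether the ip_vs kernel modules are loaded (`lsmod | grep ip_vs`); since they are not present in this kic image, it gives up on control-plane load-balancing and generates the ARP/leader-election configuration for the VIP 192.168.49.254 instead. An equivalent check that reads /proc/modules directly rather than shelling out is sketched below (assumptions: module names appear in the first column, as lsmod reports them):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded reports whether any ip_vs* module appears in /proc/modules,
// which is the file lsmod itself reads.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ip_vs loaded:", ok)
}
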
	I1109 13:56:44.307640   50941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:56:44.315582   50941 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:56:44.315693   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 13:56:44.323673   50941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 13:56:44.336356   50941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:56:44.348987   50941 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 13:56:44.364628   50941 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 13:56:44.368185   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:56:44.378442   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:56:44.512505   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:56:44.527192   50941 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:56:44.527585   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:56:44.530787   50941 out.go:179] * Verifying Kubernetes components...
	I1109 13:56:44.533648   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:56:44.676788   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:56:44.692725   50941 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 13:56:44.692806   50941 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 13:56:44.694375   50941 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m02" to be "Ready" ...
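
From this point the test polls m02's Node object for up to 6 minutes, retrying on every transient error; the long run of "connection refused" and TLS-timeout lines that follows shows each poll failing while the API server at 192.168.49.2:8443 is unreachable during the cluster restart. A sketch of that polling loop with client-go is below (assumptions: a kubeconfig in the default location and the illustrative helper name `waitNodeReady`; this is not the test harness's own code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named Node until its Ready condition is True,
// swallowing transient errors (connection refused, TLS handshake timeouts)
// and retrying until the timeout elapses.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // retry: the API server may still be restarting
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-423884-m02", 6*time.Minute); err != nil {
		fmt.Println("node never became Ready:", err)
		return
	}
	fmt.Println("node is Ready")
}
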
	I1109 13:57:15.889308   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 13:57:15.889661   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:53614->192.168.49.2:8443: read: connection reset by peer
	W1109 13:57:18.195899   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:20.196010   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:22.695941   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:25.195852   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:27.695680   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:30.195776   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:32.695117   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:35.195841   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:37.694955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:41.750119   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes ha-423884-m02)
	I1109 13:58:42.976513   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 13:58:44.194875   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:46.195854   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:48.695884   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:51.195591   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:53.694984   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:55.695023   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:57.695978   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:00.195414   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:02.195699   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:04.695049   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:06.695993   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1109 14:00:12.661230   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:00:12.661517   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:43904->192.168.49.2:8443: read: connection reset by peer
	W1109 14:00:14.695160   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:16.695701   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:19.195798   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:21.695004   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:24.195895   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:26.695529   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:28.695896   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:31.194955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:33.695955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:36.194952   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:38.694903   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:41.195976   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:43.695060   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:45.695243   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:47.695603   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:49.695924   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:52.194931   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:02.696092   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	W1109 14:01:12.697352   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	I1109 14:01:14.246046   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:01:15.195896   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:17.694927   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:19.695012   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:21.695859   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:24.195971   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:26.694922   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:28.695002   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:31.195949   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:33.196044   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:35.695811   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:38.194914   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:40.195799   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:42.695109   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:45.194966   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:47.195992   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:49.694861   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:52.194884   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:54.694898   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:56.695125   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:59.195035   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:01.694940   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:03.695952   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:06.194964   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:08.694953   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:10.695760   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:13.195697   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:15.694939   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:17.695926   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:20.195916   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:22.695194   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:25.195931   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:27.694900   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:29.694960   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:32.194988   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:34.195073   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:44.694610   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": context deadline exceeded
	I1109 14:02:44.694648   50941 node_ready.go:38] duration metric: took 6m0.000230455s for node "ha-423884-m02" to be "Ready" ...
	I1109 14:02:44.698103   50941 out.go:203] 
	W1109 14:02:44.701305   50941 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1109 14:02:44.701325   50941 out.go:285] * 
	W1109 14:02:44.703469   50941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:02:44.706530   50941 out.go:203] 
	
	
	==> CRI-O <==
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.551728258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.559379136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.56010363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.578180754Z" level=info msg="Created container fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396: kube-system/kube-controller-manager-ha-423884/kube-controller-manager" id=914f69c7-9e75-4685-97ae-ce6d487a80eb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.578978029Z" level=info msg="Starting container: fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396" id=7b0cb328-02c2-455b-8d26-92c535c13320 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.581346511Z" level=info msg="Started container" PID=1238 containerID=fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396 description=kube-system/kube-controller-manager-ha-423884/kube-controller-manager id=7b0cb328-02c2-455b-8d26-92c535c13320 name=/runtime.v1.RuntimeService/StartContainer sandboxID=575e9e561d03bd73bfc2977eee9d1a87cf7e044ef1af38644f54f805e50974ba
	Nov 09 14:02:21 ha-423884 conmon[1235]: conmon fb43ae7a5bc7148d3183 <ninfo>: container 1238 exited with status 1
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.691484867Z" level=info msg="Removing container: ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.700637954Z" level=info msg="Error loading conmon cgroup of container ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda: cgroup deleted" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.70373976Z" level=info msg="Removed container ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda: kube-system/kube-controller-manager-ha-423884/kube-controller-manager" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.550431718Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=eb071ab8-2487-4b87-9951-f3406f3f724d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.551499117Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=dc022fcb-5120-4661-9098-70f50aeb80b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.552772713Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=a5b314ab-13da-40fb-be88-4a7f65ae1f46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.552913276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.558216367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.558699867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.577264476Z" level=info msg="Created container f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=a5b314ab-13da-40fb-be88-4a7f65ae1f46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.577889688Z" level=info msg="Starting container: f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0" id=1a740b53-426e-446b-bea6-3a698a721a3b name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.580618576Z" level=info msg="Started container" PID=1253 containerID=f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0 description=kube-system/kube-apiserver-ha-423884/kube-apiserver id=1a740b53-426e-446b-bea6-3a698a721a3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=e385df39fa9a74c7a559091711257de7f4454e0e52edc9948675220b19108eb4
	Nov 09 14:02:57 ha-423884 conmon[1251]: conmon f857691ef21f6060a315 <ninfo>: container 1253 exited with status 255
	Nov 09 14:02:57 ha-423884 crio[667]: time="2025-11-09T14:02:57.959662491Z" level=info msg="Stopping container: f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0 (timeout: 30s)" id=55305a25-44dd-4ddb-b724-39a9d92f3c50 name=/runtime.v1.RuntimeService/StopContainer
	Nov 09 14:02:57 ha-423884 crio[667]: time="2025-11-09T14:02:57.971272541Z" level=info msg="Stopped container f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=55305a25-44dd-4ddb-b724-39a9d92f3c50 name=/runtime.v1.RuntimeService/StopContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.779081549Z" level=info msg="Removing container: dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.786993816Z" level=info msg="Error loading conmon cgroup of container dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1: cgroup deleted" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.790080944Z" level=info msg="Removed container dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f857691ef21f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   45 seconds ago       Exited              kube-apiserver            6                   e385df39fa9a7       kube-apiserver-ha-423884            kube-system
	fb43ae7a5bc71       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   7                   575e9e561d03b       kube-controller-manager-ha-423884   kube-system
	c2bc167e20428       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   3 minutes ago        Running             etcd                      2                   c523e19ee75d0       etcd-ha-423884                      kube-system
	ee4108629384f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago        Running             kube-scheduler            1                   eee2ee895e800       kube-scheduler-ha-423884            kube-system
	dc4b89b5cdd42       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago        Running             kube-vip                  0                   babe0da53b9cc       kube-vip-ha-423884                  kube-system
	ad03fe50fbbd1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago        Exited              etcd                      1                   c523e19ee75d0       etcd-ha-423884                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 9 13:36] overlayfs: idmapped layers are currently not supported
	[ +50.497753] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:53] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:55] overlayfs: idmapped layers are currently not supported
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ad03fe50fbbd1dace582db018b89f80349534b6604f17260fe8e6175c0110640] <==
	{"level":"warn","ts":"2025-11-09T14:00:20.007830Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:00:20.007902Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:00:20.007914Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.007856Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-09T14:00:20.008037Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008080Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008125Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:00:20.007973Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:00:20.008184Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:00:20.008220Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.008299Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008345Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008407Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008440Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008470Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008501Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008542Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008588Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008623Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008664Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008699Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.021805Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-09T14:00:20.021895Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.021928Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-09T14:00:20.021935Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-423884","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [c2bc167e204287c49f92f3ea3b5ca2ff40be8e2eed3675512ec65e082d5b7ed6] <==
	{"level":"info","ts":"2025-11-09T14:03:18.490638Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:18.490673Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:18.490684Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:19.481671Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-09T14:03:19.890346Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:19.890395Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:19.890419Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to b6e80321287bcc6a at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:19.890432Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:19.890467Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:19.890481Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:19.982028Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:20.238182Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"b6e80321287bcc6a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"warn","ts":"2025-11-09T14:03:20.238192Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c7770fc1e85485c5","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:03:20.238228Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c7770fc1e85485c5","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:03:20.238240Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"b6e80321287bcc6a","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"warn","ts":"2025-11-09T14:03:20.483000Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:20.983958Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-09T14:03:21.292813Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.292942Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.292989Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to b6e80321287bcc6a at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.293028Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.293090Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.293128Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:21.484904Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:21.986004Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 14:03:22 up 45 min,  0 user,  load average: 0.40, 0.73, 0.92
	Linux ha-423884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0] <==
	I1109 14:02:36.634199       1 server.go:150] Version: v1.34.1
	I1109 14:02:36.634303       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1109 14:02:37.909918       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:02:37.910014       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1109 14:02:37.910049       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:02:37.910094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1109 14:02:37.910134       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1109 14:02:37.910171       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:02:37.910203       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:02:37.910236       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:02:37.910268       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1109 14:02:37.910286       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:02:37.910291       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:02:37.910295       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:02:37.926088       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:02:37.927390       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:02:37.927960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:02:37.944693       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:02:37.950338       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:02:37.950373       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:02:37.950622       1 instance.go:239] Using reconciler: lease
	W1109 14:02:37.952361       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:02:57.925789       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:02:57.928142       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1109 14:02:57.951648       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396] <==
	I1109 14:02:10.123805       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:02:11.457380       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1109 14:02:11.457409       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:02:11.458876       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:02:11.459053       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:02:11.459304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1109 14:02:11.459359       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:02:21.460773       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [ee4108629384f7d2a0c69033ae60bc1c7015caec18238848cb6dace4abb60ac1] <==
	E1109 14:02:22.944232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:02:23.105929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:02:25.053888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:02:27.099287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:02:27.123796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:02:32.519640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:02:33.877009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:02:34.203560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:02:34.291436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:02:50.965757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1109 14:02:52.910441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:02:54.093633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:02:56.581927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:02:58.739939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:02:58.959102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56852->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:02:58.959223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56844->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:03:00.781917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:03:01.017035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:03:07.737453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:03:08.997413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:03:09.588533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:03:09.845299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:03:12.070030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:03:15.877609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:03:20.824924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	
	
	==> kubelet <==
	Nov 09 14:03:20 ha-423884 kubelet[803]: E1109 14:03:20.292914     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:20 ha-423884 kubelet[803]: E1109 14:03:20.394035     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:20 ha-423884 kubelet[803]: E1109 14:03:20.494907     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:20 ha-423884 kubelet[803]: E1109 14:03:20.595546     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:20 ha-423884 kubelet[803]: E1109 14:03:20.696305     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:20 ha-423884 kubelet[803]: E1109 14:03:20.797837     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:20 ha-423884 kubelet[803]: E1109 14:03:20.898653     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:20 ha-423884 kubelet[803]: E1109 14:03:20.912631     803 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.000112     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.101164     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.201879     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.302927     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.404014     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.505122     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.606609     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.707769     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.808817     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:21 ha-423884 kubelet[803]: E1109 14:03:21.910168     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.012963     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.114021     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.187019     803 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://192.168.49.2:8443/api/v1/namespaces/default/events/ha-423884.18765b213fede0a9\": dial tcp 192.168.49.2:8443: connect: connection refused" event="&Event{ObjectMeta:{ha-423884.18765b213fede0a9  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-423884,UID:ha-423884,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ha-423884 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ha-423884,},FirstTimestamp:2025-11-09 13:55:02.526730409 +0000 UTC m=+0.194632428,LastTimestamp:2025-11-09 13:55:02.634347734 +0000 UTC m=+0.302249753,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-423884,}"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.214787     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.315781     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.416363     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.517362     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884: exit status 2 (364.897343ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-423884" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (18.80s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.21s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-423884" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-423884\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-423884\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-423884\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvid
ia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizat
ions\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-423884
helpers_test.go:243: (dbg) docker inspect ha-423884:

-- stdout --
	[
	    {
	        "Id": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	        "Created": "2025-11-09T13:50:17.166169915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 51070,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:54:55.389490243Z",
	            "FinishedAt": "2025-11-09T13:54:54.671589817Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hosts",
	        "LogPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8-json.log",
	        "Name": "/ha-423884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-423884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-423884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	                "LowerDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-423884",
	                "Source": "/var/lib/docker/volumes/ha-423884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-423884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-423884",
	                "name.minikube.sigs.k8s.io": "ha-423884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89fbaf0c08047c2a06be0a8a75835803aa19533c48ff4c5735fc268ee9d93691",
	            "SandboxKey": "/var/run/docker/netns/89fbaf0c0804",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32808"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32809"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32810"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32811"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-423884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:00:10:26:7b:33",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b901b8dcb82129bdc4c62d2bf9cac8a365e41b87cf75b0978b149071ce152f44",
	                    "EndpointID": "20baa733fdf8670705aeddf1cfd5b1a5d39152767930d2a81eaedc478e6f1104",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-423884",
	                        "8c902201acb6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884: exit status 2 (305.870444ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 logs -n 25
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884-m04:/home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp testdata/cp-test.txt ha-423884-m04:/home/docker/cp-test.txt                                                            │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m04.txt │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m04_ha-423884.txt                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884.txt                                                │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node start m02 --alsologtostderr -v 5                                                                                     │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:54 UTC │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │ 09 Nov 25 13:54 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5                                                                                  │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:02 UTC │                     │
	│ node    │ ha-423884 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:54:55
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:54:55.113963   50941 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:54:55.114176   50941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:55.114201   50941 out.go:374] Setting ErrFile to fd 2...
	I1109 13:54:55.114221   50941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:54:55.114531   50941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:54:55.114968   50941 out.go:368] Setting JSON to false
	I1109 13:54:55.115825   50941 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2245,"bootTime":1762694250,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:54:55.115981   50941 start.go:143] virtualization:  
	I1109 13:54:55.119256   50941 out.go:179] * [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:54:55.122910   50941 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:54:55.122982   50941 notify.go:221] Checking for updates...
	I1109 13:54:55.128665   50941 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:54:55.131661   50941 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:54:55.134714   50941 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:54:55.137648   50941 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:54:55.140756   50941 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:54:55.144331   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:54:55.144477   50941 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:54:55.179836   50941 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:54:55.179983   50941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:54:55.239723   50941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 13:54:55.22949764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:54:55.239832   50941 docker.go:319] overlay module found
	I1109 13:54:55.244865   50941 out.go:179] * Using the docker driver based on existing profile
	I1109 13:54:55.247777   50941 start.go:309] selected driver: docker
	I1109 13:54:55.247800   50941 start.go:930] validating driver "docker" against &{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:54:55.248067   50941 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:54:55.248171   50941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:54:55.301652   50941 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 13:54:55.292554772 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:54:55.302058   50941 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 13:54:55.302090   50941 cni.go:84] Creating CNI manager for ""
	I1109 13:54:55.302144   50941 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 13:54:55.302223   50941 start.go:353] cluster config:
	{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:54:55.305501   50941 out.go:179] * Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	I1109 13:54:55.308419   50941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:54:55.311313   50941 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:54:55.314132   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:54:55.314177   50941 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 13:54:55.314201   50941 cache.go:65] Caching tarball of preloaded images
	I1109 13:54:55.314200   50941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:54:55.314293   50941 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:54:55.314309   50941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:54:55.314456   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:54:55.334236   50941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:54:55.334260   50941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:54:55.334278   50941 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:54:55.334307   50941 start.go:360] acquireMachinesLock for ha-423884: {Name:mkda5c7a1ce8a51da0d8a40a6bd47565509d6909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:54:55.334364   50941 start.go:364] duration metric: took 38.367µs to acquireMachinesLock for "ha-423884"
	I1109 13:54:55.334396   50941 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:54:55.334402   50941 fix.go:54] fixHost starting: 
	I1109 13:54:55.334657   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:54:55.351523   50941 fix.go:112] recreateIfNeeded on ha-423884: state=Stopped err=<nil>
	W1109 13:54:55.351563   50941 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:54:55.355014   50941 out.go:252] * Restarting existing docker container for "ha-423884" ...
	I1109 13:54:55.355096   50941 cli_runner.go:164] Run: docker start ha-423884
	I1109 13:54:55.620677   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:54:55.643357   50941 kic.go:430] container "ha-423884" state is running.
	I1109 13:54:55.643727   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:54:55.666053   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:54:55.666297   50941 machine.go:94] provisionDockerMachine start ...
	I1109 13:54:55.666487   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:55.687681   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:55.688070   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:55.688081   50941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:54:55.688918   50941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 13:54:58.840047   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 13:54:58.840071   50941 ubuntu.go:182] provisioning hostname "ha-423884"
	I1109 13:54:58.840136   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:58.857828   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:58.858140   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:58.858156   50941 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884 && echo "ha-423884" | sudo tee /etc/hostname
	I1109 13:54:59.019946   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 13:54:59.020040   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.037942   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:59.038251   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:59.038273   50941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:54:59.188269   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:54:59.188306   50941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:54:59.188331   50941 ubuntu.go:190] setting up certificates
	I1109 13:54:59.188340   50941 provision.go:84] configureAuth start
	I1109 13:54:59.189373   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:54:59.208053   50941 provision.go:143] copyHostCerts
	I1109 13:54:59.208097   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:54:59.208129   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:54:59.208146   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:54:59.208224   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:54:59.208317   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:54:59.208339   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:54:59.208343   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:54:59.208378   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:54:59.208440   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:54:59.208460   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:54:59.208464   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:54:59.208489   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:54:59.208551   50941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884 san=[127.0.0.1 192.168.49.2 ha-423884 localhost minikube]
	I1109 13:54:59.461403   50941 provision.go:177] copyRemoteCerts
	I1109 13:54:59.461473   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:54:59.461549   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.479030   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:54:59.583635   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 13:54:59.583697   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:54:59.602289   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 13:54:59.602361   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1109 13:54:59.620339   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 13:54:59.620401   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 13:54:59.637908   50941 provision.go:87] duration metric: took 449.546564ms to configureAuth
	I1109 13:54:59.637931   50941 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:54:59.638163   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:54:59.638260   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:54:59.655128   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:54:59.655439   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32808 <nil> <nil>}
	I1109 13:54:59.655453   50941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:55:00.057852   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:55:00.057945   50941 machine.go:97] duration metric: took 4.391628222s to provisionDockerMachine
	I1109 13:55:00.057975   50941 start.go:293] postStartSetup for "ha-423884" (driver="docker")
	I1109 13:55:00.058017   50941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:55:00.058141   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:55:00.058222   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.132751   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.319042   50941 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:55:00.329077   50941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:55:00.329106   50941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:55:00.329119   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:55:00.329189   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:55:00.329276   50941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:55:00.329283   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 13:55:00.330448   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 13:55:00.371002   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:00.407176   50941 start.go:296] duration metric: took 349.154111ms for postStartSetup
	I1109 13:55:00.407280   50941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:55:00.407327   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.431818   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.545778   50941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:55:00.551509   50941 fix.go:56] duration metric: took 5.21709566s for fixHost
	I1109 13:55:00.551542   50941 start.go:83] releasing machines lock for "ha-423884", held for 5.217154802s
	I1109 13:55:00.551634   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:55:00.570733   50941 ssh_runner.go:195] Run: cat /version.json
	I1109 13:55:00.570830   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.571142   50941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:55:00.571242   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:55:00.592161   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.595555   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:55:00.796007   50941 ssh_runner.go:195] Run: systemctl --version
	I1109 13:55:00.805538   50941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:55:00.848119   50941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:55:00.852825   50941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:55:00.852894   50941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:55:00.861218   50941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:55:00.861244   50941 start.go:496] detecting cgroup driver to use...
	I1109 13:55:00.861295   50941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:55:00.861369   50941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:55:00.877235   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:55:00.891078   50941 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:55:00.891189   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:55:00.907526   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:55:00.921033   50941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:55:01.038695   50941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:55:01.157282   50941 docker.go:234] disabling docker service ...
	I1109 13:55:01.157400   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:55:01.175939   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:55:01.191589   50941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:55:01.322566   50941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:55:01.442242   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:55:01.455592   50941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:55:01.470955   50941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:55:01.471022   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.480518   50941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:55:01.480598   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.490192   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.499971   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.508704   50941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:55:01.517693   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.526722   50941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.535091   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:01.544402   50941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:55:01.552454   50941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:55:01.560165   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:01.679582   50941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:55:01.822017   50941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:55:01.822142   50941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:55:01.826235   50941 start.go:564] Will wait 60s for crictl version
	I1109 13:55:01.826377   50941 ssh_runner.go:195] Run: which crictl
	I1109 13:55:01.830288   50941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:55:01.857542   50941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:55:01.857636   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:55:01.890996   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:55:01.922135   50941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:55:01.925065   50941 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:55:01.943662   50941 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:55:01.947786   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:55:01.958276   50941 kubeadm.go:884] updating cluster {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 13:55:01.958452   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:55:01.958516   50941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:55:01.997808   50941 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:55:01.997834   50941 crio.go:433] Images already preloaded, skipping extraction
	I1109 13:55:01.997895   50941 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 13:55:02.024927   50941 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 13:55:02.024953   50941 cache_images.go:86] Images are preloaded, skipping loading
	I1109 13:55:02.024962   50941 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 13:55:02.025128   50941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:55:02.025216   50941 ssh_runner.go:195] Run: crio config
	I1109 13:55:02.096570   50941 cni.go:84] Creating CNI manager for ""
	I1109 13:55:02.096595   50941 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 13:55:02.096612   50941 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 13:55:02.096664   50941 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423884 NodeName:ha-423884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 13:55:02.096862   50941 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 13:55:02.096890   50941 kube-vip.go:115] generating kube-vip config ...
	I1109 13:55:02.096949   50941 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 13:55:02.108971   50941 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:55:02.109069   50941 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 13:55:02.109141   50941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:55:02.117384   50941 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:55:02.117456   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1109 13:55:02.125412   50941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1109 13:55:02.139511   50941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:55:02.153010   50941 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1109 13:55:02.166712   50941 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 13:55:02.180196   50941 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 13:55:02.183971   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:55:02.194268   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:02.311353   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:55:02.326388   50941 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.2
	I1109 13:55:02.326464   50941 certs.go:195] generating shared ca certs ...
	I1109 13:55:02.326494   50941 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.326661   50941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:55:02.326749   50941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:55:02.326774   50941 certs.go:257] generating profile certs ...
	I1109 13:55:02.326889   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 13:55:02.326942   50941 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612
	I1109 13:55:02.326978   50941 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1109 13:55:02.791794   50941 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 ...
	I1109 13:55:02.791832   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612: {Name:mkffe35c2a4a9e9ef2460782868fdfad2ff0b271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.792045   50941 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612 ...
	I1109 13:55:02.792064   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612: {Name:mk387e11ec0c12eb2f7dfe43ad45967daf55df66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:02.792144   50941 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt.32540612 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt
	I1109 13:55:02.792295   50941 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key
	I1109 13:55:02.792427   50941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 13:55:02.792446   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 13:55:02.792461   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 13:55:02.792477   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 13:55:02.792493   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 13:55:02.792513   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 13:55:02.792538   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 13:55:02.792554   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 13:55:02.792565   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 13:55:02.792626   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:55:02.792662   50941 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:55:02.792674   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:55:02.792701   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:55:02.792731   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:55:02.792756   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:55:02.792801   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:02.792832   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:02.792847   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 13:55:02.792857   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 13:55:02.793472   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:55:02.812682   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:55:02.831556   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:55:02.850358   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:55:02.868603   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 13:55:02.886617   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 13:55:02.904941   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:55:02.923399   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:55:02.941967   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:55:02.960665   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:55:02.983802   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:55:03.009751   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 13:55:03.031089   50941 ssh_runner.go:195] Run: openssl version
	I1109 13:55:03.048901   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:55:03.062145   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.066948   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.067030   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:55:03.126889   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:55:03.139601   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:55:03.154789   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.160976   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.161044   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:55:03.250161   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:55:03.265557   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:55:03.284581   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.293895   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.293986   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:55:03.352776   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:55:03.366601   50941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:55:03.371480   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:55:03.428568   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:55:03.484572   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:55:03.542471   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:55:03.629320   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:55:03.682786   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 13:55:03.732655   50941 kubeadm.go:401] StartCluster: {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:55:03.732832   50941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 13:55:03.732937   50941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 13:55:03.774883   50941 cri.go:89] found id: "90f6d4700e66c1004154b3bffd5b655a9e7a54dab0ca93ca633a48ec6805be8c"
	I1109 13:55:03.774945   50941 cri.go:89] found id: "ee4108629384f7d2a0c69033ae60bc1c7015caec18238848cb6dace4abb60ac1"
	I1109 13:55:03.774963   50941 cri.go:89] found id: "dc4b89b5cdd42a6e98698322cd4a212e4b2439c3edbe3305cc3f85573f85fb2b"
	I1109 13:55:03.774978   50941 cri.go:89] found id: "ad03fe50fbbd1dace582db018b89f80349534b6604f17260fe8e6175c0110640"
	I1109 13:55:03.774996   50941 cri.go:89] found id: "435fea996772c07f8ab06a7210ea047100aeb59de8bfe2b882e29743c63515bf"
	I1109 13:55:03.775025   50941 cri.go:89] found id: ""
	I1109 13:55:03.775095   50941 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 13:55:03.797048   50941 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:55:03Z" level=error msg="open /run/runc: no such file or directory"
	I1109 13:55:03.797184   50941 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 13:55:03.808348   50941 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 13:55:03.808412   50941 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 13:55:03.808506   50941 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 13:55:03.822278   50941 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:55:03.822778   50941 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-423884" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:55:03.822942   50941 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "ha-423884" cluster setting kubeconfig missing "ha-423884" context setting]
	I1109 13:55:03.823266   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.823946   50941 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 13:55:03.824966   50941 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 13:55:03.825021   50941 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 13:55:03.825042   50941 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 13:55:03.825069   50941 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 13:55:03.825105   50941 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 13:55:03.824993   50941 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1109 13:55:03.825458   50941 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 13:55:03.835686   50941 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1109 13:55:03.835761   50941 kubeadm.go:602] duration metric: took 27.320729ms to restartPrimaryControlPlane
	I1109 13:55:03.835784   50941 kubeadm.go:403] duration metric: took 103.139447ms to StartCluster
	I1109 13:55:03.835816   50941 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.835943   50941 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:55:03.836573   50941 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:55:03.836835   50941 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:55:03.836883   50941 start.go:242] waiting for startup goroutines ...
	I1109 13:55:03.836904   50941 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 13:55:03.837391   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:03.842877   50941 out.go:179] * Enabled addons: 
	I1109 13:55:03.845819   50941 addons.go:515] duration metric: took 8.899809ms for enable addons: enabled=[]
	I1109 13:55:03.845887   50941 start.go:247] waiting for cluster config update ...
	I1109 13:55:03.845910   50941 start.go:256] writing updated cluster config ...
	I1109 13:55:03.849092   50941 out.go:203] 
	I1109 13:55:03.852433   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:03.852560   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:03.856045   50941 out.go:179] * Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	I1109 13:55:03.858860   50941 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:55:03.861834   50941 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:55:03.864733   50941 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:55:03.864755   50941 cache.go:65] Caching tarball of preloaded images
	I1109 13:55:03.864808   50941 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:55:03.864863   50941 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 13:55:03.864879   50941 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 13:55:03.865000   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:03.890212   50941 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 13:55:03.890235   50941 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 13:55:03.890249   50941 cache.go:243] Successfully downloaded all kic artifacts
	I1109 13:55:03.890271   50941 start.go:360] acquireMachinesLock for ha-423884-m02: {Name:mkc465d60ac134a0502b48f535d5c2db44f7f07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 13:55:03.890338   50941 start.go:364] duration metric: took 47.253µs to acquireMachinesLock for "ha-423884-m02"
	I1109 13:55:03.890362   50941 start.go:96] Skipping create...Using existing machine configuration
	I1109 13:55:03.890369   50941 fix.go:54] fixHost starting: m02
	I1109 13:55:03.890623   50941 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:55:03.914904   50941 fix.go:112] recreateIfNeeded on ha-423884-m02: state=Stopped err=<nil>
	W1109 13:55:03.914934   50941 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 13:55:03.917975   50941 out.go:252] * Restarting existing docker container for "ha-423884-m02" ...
	I1109 13:55:03.918057   50941 cli_runner.go:164] Run: docker start ha-423884-m02
	I1109 13:55:04.309913   50941 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:55:04.344086   50941 kic.go:430] container "ha-423884-m02" state is running.
	I1109 13:55:04.344458   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:04.369599   50941 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 13:55:04.369844   50941 machine.go:94] provisionDockerMachine start ...
	I1109 13:55:04.369909   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:04.400285   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:04.400586   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:04.400595   50941 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 13:55:04.401311   50941 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 13:55:07.579475   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 13:55:07.579506   50941 ubuntu.go:182] provisioning hostname "ha-423884-m02"
	I1109 13:55:07.579638   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:07.602366   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:07.602673   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:07.602690   50941 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m02 && echo "ha-423884-m02" | sudo tee /etc/hostname
	I1109 13:55:07.809995   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 13:55:07.810122   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:07.846000   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:07.846319   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:07.846341   50941 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 13:55:08.029626   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 13:55:08.029653   50941 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 13:55:08.029670   50941 ubuntu.go:190] setting up certificates
	I1109 13:55:08.029726   50941 provision.go:84] configureAuth start
	I1109 13:55:08.029805   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:08.053364   50941 provision.go:143] copyHostCerts
	I1109 13:55:08.053410   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:55:08.053445   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 13:55:08.053457   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 13:55:08.053539   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 13:55:08.053624   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:55:08.053647   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 13:55:08.053656   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 13:55:08.053687   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 13:55:08.053733   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:55:08.053755   50941 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 13:55:08.053762   50941 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 13:55:08.053788   50941 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 13:55:08.053839   50941 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m02 san=[127.0.0.1 192.168.49.3 ha-423884-m02 localhost minikube]
	I1109 13:55:08.908426   50941 provision.go:177] copyRemoteCerts
	I1109 13:55:08.908547   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 13:55:08.908608   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:08.925860   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:09.037241   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 13:55:09.037302   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 13:55:09.069800   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 13:55:09.069861   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 13:55:09.100884   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 13:55:09.100993   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 13:55:09.135925   50941 provision.go:87] duration metric: took 1.10618017s to configureAuth
	I1109 13:55:09.136002   50941 ubuntu.go:206] setting minikube options for container-runtime
	I1109 13:55:09.136280   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:55:09.136432   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:09.164021   50941 main.go:143] libmachine: Using SSH client type: native
	I1109 13:55:09.164323   50941 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I1109 13:55:09.164337   50941 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 13:55:10.296777   50941 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 13:55:10.296814   50941 machine.go:97] duration metric: took 5.926952254s to provisionDockerMachine
	I1109 13:55:10.296825   50941 start.go:293] postStartSetup for "ha-423884-m02" (driver="docker")
	I1109 13:55:10.296872   50941 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 13:55:10.296972   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 13:55:10.297065   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.332056   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.449335   50941 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 13:55:10.453805   50941 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 13:55:10.453831   50941 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 13:55:10.453843   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 13:55:10.453902   50941 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 13:55:10.453979   50941 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 13:55:10.453986   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 13:55:10.454091   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 13:55:10.463699   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:55:10.484008   50941 start.go:296] duration metric: took 187.133589ms for postStartSetup
	I1109 13:55:10.484157   50941 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:55:10.484228   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.524852   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.647401   50941 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 13:55:10.654990   50941 fix.go:56] duration metric: took 6.764614102s for fixHost
	I1109 13:55:10.655012   50941 start.go:83] releasing machines lock for "ha-423884-m02", held for 6.764660929s
	I1109 13:55:10.655097   50941 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 13:55:10.684905   50941 out.go:179] * Found network options:
	I1109 13:55:10.687829   50941 out.go:179]   - NO_PROXY=192.168.49.2
	W1109 13:55:10.690818   50941 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 13:55:10.690871   50941 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 13:55:10.690948   50941 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 13:55:10.690961   50941 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 13:55:10.690989   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.691019   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 13:55:10.712241   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:10.725530   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 13:55:11.009545   50941 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 13:55:11.084432   50941 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 13:55:11.084558   50941 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 13:55:11.121004   50941 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 13:55:11.121072   50941 start.go:496] detecting cgroup driver to use...
	I1109 13:55:11.121123   50941 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 13:55:11.121189   50941 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 13:55:11.194725   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 13:55:11.266109   50941 docker.go:218] disabling cri-docker service (if available) ...
	I1109 13:55:11.266214   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 13:55:11.320610   50941 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 13:55:11.355184   50941 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 13:55:11.758458   50941 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 13:55:12.038806   50941 docker.go:234] disabling docker service ...
	I1109 13:55:12.038952   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 13:55:12.067528   50941 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 13:55:12.086834   50941 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 13:55:12.313835   50941 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 13:55:12.529003   50941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 13:55:12.547999   50941 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 13:55:12.574350   50941 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 13:55:12.574468   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.593062   50941 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 13:55:12.593178   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.611675   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.621325   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.634011   50941 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 13:55:12.644140   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.656327   50941 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.666866   50941 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 13:55:12.678918   50941 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 13:55:12.688104   50941 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 13:55:12.699862   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:55:12.929281   50941 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 13:56:43.189989   50941 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.260670612s)
	I1109 13:56:43.190013   50941 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 13:56:43.190063   50941 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 13:56:43.194863   50941 start.go:564] Will wait 60s for crictl version
	I1109 13:56:43.194926   50941 ssh_runner.go:195] Run: which crictl
	I1109 13:56:43.198897   50941 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 13:56:43.224592   50941 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 13:56:43.224673   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:56:43.252803   50941 ssh_runner.go:195] Run: crio --version
	I1109 13:56:43.288977   50941 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 13:56:43.292132   50941 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 13:56:43.295175   50941 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 13:56:43.311775   50941 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 13:56:43.316096   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:56:43.327026   50941 mustload.go:66] Loading cluster: ha-423884
	I1109 13:56:43.327285   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:56:43.327549   50941 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:56:43.344797   50941 host.go:66] Checking if "ha-423884" exists ...
	I1109 13:56:43.345106   50941 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.3
	I1109 13:56:43.345119   50941 certs.go:195] generating shared ca certs ...
	I1109 13:56:43.345155   50941 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 13:56:43.345275   50941 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 13:56:43.345325   50941 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 13:56:43.345337   50941 certs.go:257] generating profile certs ...
	I1109 13:56:43.345411   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 13:56:43.345491   50941 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.75d82079
	I1109 13:56:43.345540   50941 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 13:56:43.345557   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 13:56:43.345575   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 13:56:43.345594   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 13:56:43.345615   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 13:56:43.345628   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 13:56:43.345642   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 13:56:43.345658   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 13:56:43.345671   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 13:56:43.345729   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 13:56:43.345760   50941 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 13:56:43.345772   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 13:56:43.345800   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 13:56:43.345827   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 13:56:43.345850   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 13:56:43.345896   50941 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 13:56:43.345926   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.345942   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.345953   50941 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.346011   50941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:56:43.364089   50941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:56:43.460186   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 13:56:43.463803   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 13:56:43.471985   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 13:56:43.475672   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 13:56:43.483925   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 13:56:43.487471   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 13:56:43.495787   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 13:56:43.499361   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 13:56:43.507536   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 13:56:43.511561   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 13:56:43.520262   50941 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 13:56:43.524097   50941 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 13:56:43.532380   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 13:56:43.553569   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 13:56:43.574274   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 13:56:43.593982   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 13:56:43.611803   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 13:56:43.629036   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 13:56:43.646449   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 13:56:43.665505   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 13:56:43.685863   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 13:56:43.704695   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 13:56:43.725055   50941 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 13:56:43.743980   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 13:56:43.757782   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 13:56:43.770797   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 13:56:43.783823   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 13:56:43.798200   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 13:56:43.811164   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 13:56:43.824190   50941 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 13:56:43.838949   50941 ssh_runner.go:195] Run: openssl version
	I1109 13:56:43.845204   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 13:56:43.853394   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.857520   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.857581   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 13:56:43.898978   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 13:56:43.907056   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 13:56:43.915514   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.919395   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.919509   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 13:56:43.961298   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 13:56:43.969278   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 13:56:43.979745   50941 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.983461   50941 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 13:56:43.983552   50941 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 13:56:44.024743   50941 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 13:56:44.034346   50941 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 13:56:44.038346   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 13:56:44.083522   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 13:56:44.124383   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 13:56:44.165272   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 13:56:44.207715   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 13:56:44.249227   50941 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 13:56:44.295420   50941 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1109 13:56:44.295534   50941 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 13:56:44.295575   50941 kube-vip.go:115] generating kube-vip config ...
	I1109 13:56:44.295626   50941 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 13:56:44.307501   50941 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 13:56:44.307559   50941 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 13:56:44.307640   50941 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 13:56:44.315582   50941 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 13:56:44.315693   50941 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 13:56:44.323673   50941 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 13:56:44.336356   50941 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 13:56:44.348987   50941 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 13:56:44.364628   50941 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 13:56:44.368185   50941 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 13:56:44.378442   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:56:44.512505   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:56:44.527192   50941 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 13:56:44.527585   50941 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:56:44.530787   50941 out.go:179] * Verifying Kubernetes components...
	I1109 13:56:44.533648   50941 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 13:56:44.676788   50941 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 13:56:44.692725   50941 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 13:56:44.692806   50941 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 13:56:44.694375   50941 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m02" to be "Ready" ...
	I1109 13:57:15.889308   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 13:57:15.889661   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:53614->192.168.49.2:8443: read: connection reset by peer
	W1109 13:57:18.195899   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:20.196010   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:22.695941   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:25.195852   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:27.695680   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:30.195776   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:32.695117   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:35.195841   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:57:37.694955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:41.750119   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes ha-423884-m02)
	I1109 13:58:42.976513   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 13:58:44.194875   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:46.195854   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:48.695884   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:51.195591   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:53.694984   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:55.695023   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:58:57.695978   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:00.195414   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:02.195699   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:04.695049   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 13:59:06.695993   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1109 14:00:12.661230   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:00:12.661517   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:43904->192.168.49.2:8443: read: connection reset by peer
	W1109 14:00:14.695160   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:16.695701   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:19.195798   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:21.695004   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:24.195895   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:26.695529   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:28.695896   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:31.194955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:33.695955   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:36.194952   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:38.694903   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:41.195976   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:43.695060   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:45.695243   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:47.695603   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:49.695924   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:00:52.194931   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:02.696092   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	W1109 14:01:12.697352   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	I1109 14:01:14.246046   50941 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:01:15.195896   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:17.694927   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:19.695012   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:21.695859   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:24.195971   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:26.694922   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:28.695002   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:31.195949   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:33.196044   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:35.695811   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:38.194914   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:40.195799   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:42.695109   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:45.194966   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:47.195992   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:49.694861   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:52.194884   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:54.694898   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:56.695125   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:01:59.195035   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:01.694940   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:03.695952   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:06.194964   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:08.694953   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:10.695760   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:13.195697   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:15.694939   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:17.695926   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:20.195916   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:22.695194   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:25.195931   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:27.694900   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:29.694960   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:32.194988   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:34.195073   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1109 14:02:44.694610   50941 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": context deadline exceeded
	I1109 14:02:44.694648   50941 node_ready.go:38] duration metric: took 6m0.000230455s for node "ha-423884-m02" to be "Ready" ...
	I1109 14:02:44.698103   50941 out.go:203] 
	W1109 14:02:44.701305   50941 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1109 14:02:44.701325   50941 out.go:285] * 
	W1109 14:02:44.703469   50941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:02:44.706530   50941 out.go:203] 
	
	
	==> CRI-O <==
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.551728258Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.559379136Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.56010363Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.578180754Z" level=info msg="Created container fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396: kube-system/kube-controller-manager-ha-423884/kube-controller-manager" id=914f69c7-9e75-4685-97ae-ce6d487a80eb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.578978029Z" level=info msg="Starting container: fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396" id=7b0cb328-02c2-455b-8d26-92c535c13320 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:02:09 ha-423884 crio[667]: time="2025-11-09T14:02:09.581346511Z" level=info msg="Started container" PID=1238 containerID=fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396 description=kube-system/kube-controller-manager-ha-423884/kube-controller-manager id=7b0cb328-02c2-455b-8d26-92c535c13320 name=/runtime.v1.RuntimeService/StartContainer sandboxID=575e9e561d03bd73bfc2977eee9d1a87cf7e044ef1af38644f54f805e50974ba
	Nov 09 14:02:21 ha-423884 conmon[1235]: conmon fb43ae7a5bc7148d3183 <ninfo>: container 1238 exited with status 1
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.691484867Z" level=info msg="Removing container: ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.700637954Z" level=info msg="Error loading conmon cgroup of container ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda: cgroup deleted" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:21 ha-423884 crio[667]: time="2025-11-09T14:02:21.70373976Z" level=info msg="Removed container ee43c2dc250434d5cc4568e9176c1adbc891131ee25ba52582eec6dc2abb7fda: kube-system/kube-controller-manager-ha-423884/kube-controller-manager" id=20d01fb9-69b8-4d52-8e94-f64f27447597 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.550431718Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=eb071ab8-2487-4b87-9951-f3406f3f724d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.551499117Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=dc022fcb-5120-4661-9098-70f50aeb80b1 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.552772713Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=a5b314ab-13da-40fb-be88-4a7f65ae1f46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.552913276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.558216367Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.558699867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.577264476Z" level=info msg="Created container f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=a5b314ab-13da-40fb-be88-4a7f65ae1f46 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.577889688Z" level=info msg="Starting container: f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0" id=1a740b53-426e-446b-bea6-3a698a721a3b name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:02:36 ha-423884 crio[667]: time="2025-11-09T14:02:36.580618576Z" level=info msg="Started container" PID=1253 containerID=f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0 description=kube-system/kube-apiserver-ha-423884/kube-apiserver id=1a740b53-426e-446b-bea6-3a698a721a3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=e385df39fa9a74c7a559091711257de7f4454e0e52edc9948675220b19108eb4
	Nov 09 14:02:57 ha-423884 conmon[1251]: conmon f857691ef21f6060a315 <ninfo>: container 1253 exited with status 255
	Nov 09 14:02:57 ha-423884 crio[667]: time="2025-11-09T14:02:57.959662491Z" level=info msg="Stopping container: f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0 (timeout: 30s)" id=55305a25-44dd-4ddb-b724-39a9d92f3c50 name=/runtime.v1.RuntimeService/StopContainer
	Nov 09 14:02:57 ha-423884 crio[667]: time="2025-11-09T14:02:57.971272541Z" level=info msg="Stopped container f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=55305a25-44dd-4ddb-b724-39a9d92f3c50 name=/runtime.v1.RuntimeService/StopContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.779081549Z" level=info msg="Removing container: dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.786993816Z" level=info msg="Error loading conmon cgroup of container dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1: cgroup deleted" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:02:58 ha-423884 crio[667]: time="2025-11-09T14:02:58.790080944Z" level=info msg="Removed container dab52ef1281dc29d9aafc0f1b10c6757441456ebe6e9ca4316f204ffc32d3ce1: kube-system/kube-apiserver-ha-423884/kube-apiserver" id=4d31d8f8-0ff3-4c90-882a-124e284c1b87 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f857691ef21f6       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   47 seconds ago       Exited              kube-apiserver            6                   e385df39fa9a7       kube-apiserver-ha-423884            kube-system
	fb43ae7a5bc71       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   7                   575e9e561d03b       kube-controller-manager-ha-423884   kube-system
	c2bc167e20428       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   3 minutes ago        Running             etcd                      2                   c523e19ee75d0       etcd-ha-423884                      kube-system
	ee4108629384f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago        Running             kube-scheduler            1                   eee2ee895e800       kube-scheduler-ha-423884            kube-system
	dc4b89b5cdd42       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago        Running             kube-vip                  0                   babe0da53b9cc       kube-vip-ha-423884                  kube-system
	ad03fe50fbbd1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago        Exited              etcd                      1                   c523e19ee75d0       etcd-ha-423884                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 9 13:36] overlayfs: idmapped layers are currently not supported
	[ +50.497753] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:53] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:55] overlayfs: idmapped layers are currently not supported
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ad03fe50fbbd1dace582db018b89f80349534b6604f17260fe8e6175c0110640] <==
	{"level":"warn","ts":"2025-11-09T14:00:20.007830Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:00:20.007902Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:00:20.007914Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.007856Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-09T14:00:20.008037Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008080Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008125Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:00:20.007973Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:00:20.008184Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:00:20.008220Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.008299Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008345Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008407Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008440Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:00:20.008470Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008501Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008542Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008588Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008623Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008664Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.008699Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"c7770fc1e85485c5"}
	{"level":"info","ts":"2025-11-09T14:00:20.021805Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-09T14:00:20.021895Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:00:20.021928Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-09T14:00:20.021935Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"ha-423884","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [c2bc167e204287c49f92f3ea3b5ca2ff40be8e2eed3675512ec65e082d5b7ed6] <==
	{"level":"info","ts":"2025-11-09T14:03:21.292942Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.292989Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to b6e80321287bcc6a at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.293028Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.293090Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:21.293128Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:21.484904Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:21.986004Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:22.486153Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-09T14:03:22.690907Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:22.690962Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:22.690987Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to b6e80321287bcc6a at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:22.691002Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:22.691032Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:22.691042Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:22.986323Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:23.487379Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-11-09T14:03:23.987828Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-11-09T14:03:24.090661Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:24.090731Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:24.090754Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to b6e80321287bcc6a at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:24.090763Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 4, index: 2084] sent MsgPreVote request to c7770fc1e85485c5 at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:24.090813Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2025-11-09T14:03:24.090830Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-11-09T14:03:24.229664Z","caller":"etcdserver/server.go:1814","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-423884 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"warn","ts":"2025-11-09T14:03:24.488013Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041202906160426,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 14:03:24 up 45 min,  0 user,  load average: 0.40, 0.73, 0.92
	Linux ha-423884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [f857691ef21f6060a3153742c856510acde291aa03d98022486b54ccdd6b16f0] <==
	I1109 14:02:36.634199       1 server.go:150] Version: v1.34.1
	I1109 14:02:36.634303       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1109 14:02:37.909918       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:02:37.910014       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1109 14:02:37.910049       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:02:37.910094       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1109 14:02:37.910134       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1109 14:02:37.910171       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:02:37.910203       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:02:37.910236       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:02:37.910268       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1109 14:02:37.910286       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:02:37.910291       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:02:37.910295       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:02:37.926088       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:02:37.927390       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:02:37.927960       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:02:37.944693       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:02:37.950338       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:02:37.950373       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:02:37.950622       1 instance.go:239] Using reconciler: lease
	W1109 14:02:37.952361       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:02:57.925789       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:02:57.928142       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F1109 14:02:57.951648       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [fb43ae7a5bc7148d318300f77b90f87d0bed19698a40a38ce78123c68c92f396] <==
	I1109 14:02:10.123805       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:02:11.457380       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1109 14:02:11.457409       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:02:11.458876       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:02:11.459053       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:02:11.459304       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1109 14:02:11.459359       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:02:21.460773       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [ee4108629384f7d2a0c69033ae60bc1c7015caec18238848cb6dace4abb60ac1] <==
	E1109 14:02:23.105929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:02:25.053888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:02:27.099287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:02:27.123796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:02:32.519640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:02:33.877009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:02:34.203560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:02:34.291436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:02:50.965757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1109 14:02:52.910441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:02:54.093633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:02:56.581927       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:02:58.739939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:02:58.959102       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56852->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:02:58.959223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.2:56844->192.168.49.2:8443: read: connection reset by peer" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:03:00.781917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:03:01.017035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:03:07.737453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:03:08.997413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:03:09.588533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:03:09.845299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:03:12.070030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:03:15.877609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:03:20.824924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:03:23.126426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	
	
	==> kubelet <==
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.416363     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.517362     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.583901     803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-423884\" not found"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.618587     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.720046     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.821225     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:22 ha-423884 kubelet[803]: E1109 14:03:22.923119     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.023606     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.124590     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.225803     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.326954     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.428372     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.529557     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.630191     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.731336     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.833557     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:23 ha-423884 kubelet[803]: E1109 14:03:23.934370     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:24 ha-423884 kubelet[803]: E1109 14:03:24.035794     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:24 ha-423884 kubelet[803]: E1109 14:03:24.136758     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:24 ha-423884 kubelet[803]: E1109 14:03:24.237737     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:24 ha-423884 kubelet[803]: E1109 14:03:24.338982     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:24 ha-423884 kubelet[803]: E1109 14:03:24.440255     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:24 ha-423884 kubelet[803]: E1109 14:03:24.540689     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:24 ha-423884 kubelet[803]: E1109 14:03:24.641771     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Nov 09 14:03:24 ha-423884 kubelet[803]: E1109 14:03:24.742886     803 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-423884\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884: exit status 2 (318.195884ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:264: "ha-423884" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.21s)
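
Note on the etcd log above: the surviving member aec36adc501070cc keeps entering pre-vote at term 4 and only ever collects its own MsgPreVoteResp, so with peers b6e80321287bcc6a and c7770fc1e85485c5 unreachable it cannot assemble a Raft quorum; that is why the ReadIndex retries time out and the apiserver on this node stays down. A minimal Go sketch of the quorum arithmetic (illustrative only, not code from this repository):

package main

import "fmt"

// quorum returns the number of voters a Raft cluster of size n needs
// before it can elect a leader or serve a linearizable read.
func quorum(n int) int { return n/2 + 1 }

func main() {
	members := 3  // ha-423884, ha-423884-m02, ha-423884-m03
	reachable := 1 // only aec36adc501070cc answers its own pre-vote
	fmt.Printf("quorum=%d reachable=%d leader possible: %v\n",
		quorum(members), reachable, reachable >= quorum(members))
	// Output: quorum=2 reachable=1 leader possible: false
}
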

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (2.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 stop --alsologtostderr -v 5: (2.678744225s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5: exit status 7 (139.909943ms)

                                                
                                                
-- stdout --
	ha-423884
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423884-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423884-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423884-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:03:27.929559   55851 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:03:27.929697   55851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:27.929707   55851 out.go:374] Setting ErrFile to fd 2...
	I1109 14:03:27.929713   55851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:27.930085   55851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:03:27.930301   55851 out.go:368] Setting JSON to false
	I1109 14:03:27.930324   55851 mustload.go:66] Loading cluster: ha-423884
	I1109 14:03:27.930867   55851 notify.go:221] Checking for updates...
	I1109 14:03:27.931285   55851 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:27.931432   55851 status.go:174] checking status of ha-423884 ...
	I1109 14:03:27.932358   55851 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:27.951062   55851 status.go:371] ha-423884 host status = "Stopped" (err=<nil>)
	I1109 14:03:27.951083   55851 status.go:384] host is not running, skipping remaining checks
	I1109 14:03:27.951089   55851 status.go:176] ha-423884 status: &{Name:ha-423884 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:03:27.951118   55851 status.go:174] checking status of ha-423884-m02 ...
	I1109 14:03:27.951425   55851 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:27.976127   55851 status.go:371] ha-423884-m02 host status = "Stopped" (err=<nil>)
	I1109 14:03:27.976145   55851 status.go:384] host is not running, skipping remaining checks
	I1109 14:03:27.976152   55851 status.go:176] ha-423884-m02 status: &{Name:ha-423884-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:03:27.976170   55851 status.go:174] checking status of ha-423884-m03 ...
	I1109 14:03:27.976497   55851 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:03:27.996121   55851 status.go:371] ha-423884-m03 host status = "Stopped" (err=<nil>)
	I1109 14:03:27.996141   55851 status.go:384] host is not running, skipping remaining checks
	I1109 14:03:27.996148   55851 status.go:176] ha-423884-m03 status: &{Name:ha-423884-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:03:27.996165   55851 status.go:174] checking status of ha-423884-m04 ...
	I1109 14:03:27.996460   55851 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:03:28.020449   55851 status.go:371] ha-423884-m04 host status = "Stopped" (err=<nil>)
	I1109 14:03:28.020477   55851 status.go:384] host is not running, skipping remaining checks
	I1109 14:03:28.020483   55851 status.go:176] ha-423884-m04 status: &{Name:ha-423884-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
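
The stderr trace above shows how "minikube status" arrives at Stopped for every node: status.go probes each profile container with "docker container inspect <name> --format={{.State.Status}}" and, once the container itself is not running, skips the kubelet/apiserver checks. A small stand-alone sketch of that probe (illustrative; it assumes the docker CLI is on PATH and that a container named ha-423884 exists):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe that status.go logs at cli_runner.go:164.
	out, err := exec.Command("docker", "container", "inspect",
		"ha-423884", "--format", "{{.State.Status}}").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	state := strings.TrimSpace(string(out))
	fmt.Println("container state:", state) // "exited" for a stopped node
	if state != "running" {
		fmt.Println("host is not running, skipping remaining checks")
	}
}
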
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-423884-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-423884
helpers_test.go:243: (dbg) docker inspect ha-423884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	        "Created": "2025-11-09T13:50:17.166169915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-11-09T13:54:55.389490243Z",
	            "FinishedAt": "2025-11-09T14:03:27.198748336Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hosts",
	        "LogPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8-json.log",
	        "Name": "/ha-423884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-423884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-423884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	                "LowerDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-423884",
	                "Source": "/var/lib/docker/volumes/ha-423884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-423884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-423884",
	                "name.minikube.sigs.k8s.io": "ha-423884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-423884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b901b8dcb82129bdc4c62d2bf9cac8a365e41b87cf75b0978b149071ce152f44",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-423884",
	                        "8c902201acb6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884: exit status 7 (75.90812ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 7 (may be ok)
helpers_test.go:249: "ha-423884" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.92s)
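
The three assertions above (ha_test.go:545, :551, :554) appear to count specific lines in the status dump: they expect two "type: Control Plane" entries, three stopped kubelets and two stopped apiservers, i.e. a three-node cluster, while the captured output still reports three control-plane nodes plus a worker, so every count is off by one. A rough, hypothetical Go sketch of that kind of line-count check (not the actual ha_test.go implementation):

package main

import (
	"fmt"
	"strings"
)

// countLines reports how many lines of a status dump exactly match want
// after trimming surrounding whitespace.
func countLines(status, want string) int {
	n := 0
	for _, line := range strings.Split(status, "\n") {
		if strings.TrimSpace(line) == want {
			n++
		}
	}
	return n
}

func main() {
	// Abbreviated form of the status output captured above.
	status := "ha-423884\ntype: Control Plane\nkubelet: Stopped\napiserver: Stopped\n" +
		"ha-423884-m02\ntype: Control Plane\nkubelet: Stopped\napiserver: Stopped\n" +
		"ha-423884-m03\ntype: Control Plane\nkubelet: Stopped\napiserver: Stopped\n" +
		"ha-423884-m04\ntype: Worker\nkubelet: Stopped\n"
	fmt.Println(countLines(status, "type: Control Plane")) // 3, test expects 2
	fmt.Println(countLines(status, "kubelet: Stopped"))    // 4, test expects 3
	fmt.Println(countLines(status, "apiserver: Stopped"))  // 3, test expects 2
}
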

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (109.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1109 14:04:20.733960    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m44.825659864s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5: (1.218727464s)
ha_test.go:573: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:576: status says not three hosts are running: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:579: status says not three kubelets are running: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:582: status says not two apiservers are running: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
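
The go-template passed to kubectl at ha_test.go:594 iterates every node's status.conditions and prints the status of each condition whose type is Ready, one line per node; ha_test.go:599 then expects exactly three such lines, but the restarted cluster reports four Ready nodes, so the check fails even though every node is healthy. A tiny Go equivalent of that count, fed with the captured output (illustrative only):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Output of the go-template quoted above, as captured in the failure.
	got := "' True\n True\n True\n True\n'"
	ready := strings.Count(got, " True\n")
	fmt.Printf("nodes reporting Ready=True: %d (test expected 3)\n", ready)
}
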
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-423884
helpers_test.go:243: (dbg) docker inspect ha-423884:

-- stdout --
	[
	    {
	        "Id": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	        "Created": "2025-11-09T13:50:17.166169915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56035,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:03:28.454326897Z",
	            "FinishedAt": "2025-11-09T14:03:27.198748336Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hosts",
	        "LogPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8-json.log",
	        "Name": "/ha-423884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-423884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-423884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	                "LowerDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-423884",
	                "Source": "/var/lib/docker/volumes/ha-423884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-423884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-423884",
	                "name.minikube.sigs.k8s.io": "ha-423884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a517d91b9dd2fa9b7c1a86f3c7ce600153c1394576da0eb7ce565af8604f53c",
	            "SandboxKey": "/var/run/docker/netns/1a517d91b9dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-423884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:a0:79:53:a9:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b901b8dcb82129bdc4c62d2bf9cac8a365e41b87cf75b0978b149071ce152f44",
	                    "EndpointID": "863a231ee9ea532fe20e7b03570549e0d16ef617b4f2a4ad156998677dd29113",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-423884",
	                        "8c902201acb6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
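
The full docker inspect dump above can be narrowed to the handful of fields the post-mortem actually reads. A minimal sketch (not part of the recorded run; the container name is taken from the dump):

    # Container state and restart bookkeeping.
    docker container inspect ha-423884 --format 'status={{.State.Status}} restarts={{.RestartCount}} started={{.State.StartedAt}}'
    # Host port mapped to each exposed container port (22, 2376, 5000, 8443, 32443 above).
    docker container inspect ha-423884 --format '{{range $port, $bindings := .NetworkSettings.Ports}}{{$port}} -> {{(index $bindings 0).HostPort}}{{"\n"}}{{end}}'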
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 logs -n 25: (1.980211506s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884-m04:/home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp testdata/cp-test.txt ha-423884-m04:/home/docker/cp-test.txt                                                            │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m04.txt │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m04_ha-423884.txt                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884.txt                                                │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node start m02 --alsologtostderr -v 5                                                                                     │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:54 UTC │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │ 09 Nov 25 13:54 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5                                                                                  │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:02 UTC │                     │
	│ node    │ ha-423884 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │ 09 Nov 25 14:03 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │ 09 Nov 25 14:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
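
The stop/start rows at the bottom of the audit table are the RestartCluster sequence itself. A minimal sketch of that sequence (binary path, profile name and flags copied from the audit table and from the failed status call earlier, not re-derived):

    out/minikube-linux-arm64 -p ha-423884 stop --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-423884 start --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5   # the check ha_test.go:576 runs afterwards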
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:03:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:03:28.177539   55908 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:03:28.177725   55908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:28.177737   55908 out.go:374] Setting ErrFile to fd 2...
	I1109 14:03:28.177743   55908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:28.178015   55908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:03:28.178387   55908 out.go:368] Setting JSON to false
	I1109 14:03:28.179233   55908 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2759,"bootTime":1762694250,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:03:28.179304   55908 start.go:143] virtualization:  
	I1109 14:03:28.182654   55908 out.go:179] * [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:03:28.186399   55908 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:03:28.186530   55908 notify.go:221] Checking for updates...
	I1109 14:03:28.192400   55908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:03:28.195380   55908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:28.198311   55908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:03:28.201212   55908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:03:28.204122   55908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:03:28.207578   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:28.208223   55908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:03:28.238570   55908 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:03:28.238679   55908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:28.302173   55908 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 14:03:28.29285158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:28.302284   55908 docker.go:319] overlay module found
	I1109 14:03:28.305382   55908 out.go:179] * Using the docker driver based on existing profile
	I1109 14:03:28.308271   55908 start.go:309] selected driver: docker
	I1109 14:03:28.308292   55908 start.go:930] validating driver "docker" against &{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:28.308437   55908 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:03:28.308547   55908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:28.367315   55908 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 14:03:28.35650136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:28.367739   55908 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:03:28.367770   55908 cni.go:84] Creating CNI manager for ""
	I1109 14:03:28.367814   55908 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 14:03:28.367923   55908 start.go:353] cluster config:
	{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:28.372921   55908 out.go:179] * Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	I1109 14:03:28.375587   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:03:28.378486   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:03:28.381428   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:28.381482   55908 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:03:28.381492   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:03:28.381532   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:03:28.381584   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:03:28.381603   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:03:28.381760   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:28.401896   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:03:28.401919   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:03:28.401946   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:03:28.401968   55908 start.go:360] acquireMachinesLock for ha-423884: {Name:mkda5c7a1ce8a51da0d8a40a6bd47565509d6909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:03:28.402035   55908 start.go:364] duration metric: took 47.073µs to acquireMachinesLock for "ha-423884"
	I1109 14:03:28.402054   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:03:28.402059   55908 fix.go:54] fixHost starting: 
	I1109 14:03:28.402320   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:28.419704   55908 fix.go:112] recreateIfNeeded on ha-423884: state=Stopped err=<nil>
	W1109 14:03:28.419733   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:03:28.423107   55908 out.go:252] * Restarting existing docker container for "ha-423884" ...
	I1109 14:03:28.423213   55908 cli_runner.go:164] Run: docker start ha-423884
	I1109 14:03:28.683970   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:28.706610   55908 kic.go:430] container "ha-423884" state is running.
	I1109 14:03:28.707012   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:28.730099   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:28.730346   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:03:28.730410   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:28.752410   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:28.752757   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:28.752774   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:03:28.753518   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:03:31.903504   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 14:03:31.903534   55908 ubuntu.go:182] provisioning hostname "ha-423884"
	I1109 14:03:31.903601   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:31.923571   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:31.923916   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:31.923929   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884 && echo "ha-423884" | sudo tee /etc/hostname
	I1109 14:03:32.084992   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 14:03:32.085077   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.103777   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:32.104122   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:32.104149   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:03:32.256008   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:03:32.256036   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:03:32.256065   55908 ubuntu.go:190] setting up certificates
	I1109 14:03:32.256074   55908 provision.go:84] configureAuth start
	I1109 14:03:32.256143   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:32.275304   55908 provision.go:143] copyHostCerts
	I1109 14:03:32.275347   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:32.275379   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:03:32.275389   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:32.275467   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:03:32.275563   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:32.275585   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:03:32.275593   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:32.275622   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:03:32.275677   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:32.275699   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:03:32.275704   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:32.275734   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:03:32.275800   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884 san=[127.0.0.1 192.168.49.2 ha-423884 localhost minikube]
	I1109 14:03:32.661025   55908 provision.go:177] copyRemoteCerts
	I1109 14:03:32.661095   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:03:32.661138   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.678774   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:32.784475   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:03:32.784549   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:03:32.802319   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:03:32.802376   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:03:32.819169   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:03:32.819280   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1109 14:03:32.836450   55908 provision.go:87] duration metric: took 580.362722ms to configureAuth
	I1109 14:03:32.836513   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:03:32.836762   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:32.836868   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.853354   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:32.853661   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:32.853680   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:03:33.144760   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:03:33.144782   55908 machine.go:97] duration metric: took 4.41442095s to provisionDockerMachine
	I1109 14:03:33.144794   55908 start.go:293] postStartSetup for "ha-423884" (driver="docker")
	I1109 14:03:33.144804   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:03:33.144881   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:03:33.144923   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.163262   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.271726   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:03:33.275165   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:03:33.275193   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:03:33.275203   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:03:33.275256   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:03:33.275333   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:03:33.275341   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:03:33.275445   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:03:33.282869   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:33.300086   55908 start.go:296] duration metric: took 155.276378ms for postStartSetup
	I1109 14:03:33.300181   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:33.300227   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.318900   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.421156   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:03:33.426364   55908 fix.go:56] duration metric: took 5.024296824s for fixHost
	I1109 14:03:33.426438   55908 start.go:83] releasing machines lock for "ha-423884", held for 5.024394146s
	I1109 14:03:33.426527   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:33.444332   55908 ssh_runner.go:195] Run: cat /version.json
	I1109 14:03:33.444382   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.444389   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:03:33.444465   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.466109   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.468674   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.567827   55908 ssh_runner.go:195] Run: systemctl --version
	I1109 14:03:33.665464   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:03:33.703682   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:03:33.708050   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:03:33.708118   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:03:33.716273   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:03:33.716295   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:03:33.716329   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:03:33.716378   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:03:33.732433   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:03:33.746199   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:03:33.746294   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:03:33.762279   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:03:33.775981   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:03:33.917723   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:03:34.035293   55908 docker.go:234] disabling docker service ...
	I1109 14:03:34.035371   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:03:34.050665   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:03:34.063795   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:03:34.194207   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:03:34.316201   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:03:34.328760   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:03:34.342596   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:03:34.342661   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.351380   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:03:34.351501   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.360283   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.369198   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.378151   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:03:34.386268   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.394888   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.403377   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.412509   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:03:34.419807   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:03:34.427015   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:34.533676   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:03:34.661746   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:03:34.661816   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:03:34.665477   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:03:34.665590   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:03:34.668882   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:03:34.697803   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:03:34.697964   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:34.726272   55908 ssh_runner.go:195] Run: crio --version
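
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. A minimal sketch for checking the result on the node (assuming a shell opened with "out/minikube-linux-arm64 -p ha-423884 ssh"; not part of the recorded run):

    # Settings the sed edits above are expected to leave behind.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml            # runtime-endpoint: unix:///var/run/crio/crio.sock
    sudo systemctl is-active crio   # should report "active" after the restart above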
	I1109 14:03:34.758410   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:03:34.761247   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:03:34.776734   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:03:34.780588   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:34.790316   55908 kubeadm.go:884] updating cluster {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:03:34.790470   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:34.790530   55908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:03:34.825584   55908 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:03:34.825621   55908 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:03:34.825685   55908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:03:34.851854   55908 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:03:34.851980   55908 cache_images.go:86] Images are preloaded, skipping loading
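The preload check is just `sudo crictl images --output json` plus a tag comparison; a small sketch of that comparison, assuming crictl's JSON output is an `images` array whose entries carry `repoTags`:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagesPreloaded reports whether every wanted tag already exists in the runtime,
// which is what lets the log above skip extraction ("Images already preloaded").
func imagesPreloaded(wanted []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var parsed crictlImages
	if err := json.Unmarshal(out, &parsed); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, w := range wanted {
		if !have[w] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{"registry.k8s.io/pause:3.10.1"})
	fmt.Println(ok, err)
}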
	I1109 14:03:34.851997   55908 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 14:03:34.852146   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:03:34.852273   55908 ssh_runner.go:195] Run: crio config
	I1109 14:03:34.903939   55908 cni.go:84] Creating CNI manager for ""
	I1109 14:03:34.903963   55908 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 14:03:34.903981   55908 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:03:34.904009   55908 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423884 NodeName:ha-423884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:03:34.904140   55908 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
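The kubeadm documents above are rendered from the option struct logged at kubeadm.go:190. A rough text/template sketch of how a fragment of such a document could be produced (hypothetical `params` struct; not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// params is a hypothetical subset of the kubeadm options shown in the log.
type params struct {
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	// Values taken from the configuration logged above.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8443,
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.34.1",
	})
}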
	
	I1109 14:03:34.904162   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:03:34.904219   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:03:34.915786   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:34.915909   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
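kube-vip.go first probes for the ip_vs kernel module and, because `lsmod | grep ip_vs` exits non-zero here, emits an ARP-only manifest. A sketch of that decision, assuming the lsmod probe is the whole test:

package main

import (
	"fmt"
	"os/exec"
)

// ipvsAvailable mirrors the `lsmod | grep ip_vs` probe from the log:
// grep exits 0 only when a matching module line is present.
func ipvsAvailable() bool {
	err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run()
	return err == nil
}

func main() {
	if ipvsAvailable() {
		fmt.Println("ip_vs present: control-plane load-balancing can be enabled")
	} else {
		fmt.Println("ip_vs missing: generating ARP-only kube-vip config")
	}
}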
	I1109 14:03:34.915977   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:03:34.923406   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:03:34.923480   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1109 14:03:34.931134   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1109 14:03:34.943678   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:03:34.956560   55908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1109 14:03:34.969028   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:03:34.981532   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:03:34.985043   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:34.994528   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:35.107177   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:35.123121   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.2
	I1109 14:03:35.123194   55908 certs.go:195] generating shared ca certs ...
	I1109 14:03:35.123226   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:35.123409   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:03:35.123481   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:03:35.123518   55908 certs.go:257] generating profile certs ...
	I1109 14:03:35.123657   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:03:35.123781   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612
	I1109 14:03:35.123858   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:03:35.123923   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:03:35.123960   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:03:35.124009   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:03:35.124043   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:03:35.124090   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:03:35.124123   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:03:35.124169   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:03:35.124203   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:03:35.124294   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:03:35.124369   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:03:35.124408   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:03:35.124455   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:03:35.124508   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:03:35.124566   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:03:35.124648   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:35.124724   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.124808   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.124844   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.125710   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:03:35.143578   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:03:35.160309   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:03:35.180028   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:03:35.198803   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:03:35.222988   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:03:35.246464   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:03:35.273513   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:03:35.298574   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:03:35.323310   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:03:35.344665   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:03:35.365172   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:03:35.378569   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:03:35.385015   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:03:35.394601   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.398299   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.398412   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.453607   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:03:35.463012   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:03:35.471886   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.475852   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.475960   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.519535   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:03:35.532870   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:03:35.541526   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.545559   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.545647   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.587429   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
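Each CA install above follows the usual OpenSSL layout: copy the PEM into /usr/share/ca-certificates, then symlink it as <subject-hash>.0 under /etc/ssl/certs. A sketch of that step, shelling out to openssl for the hash exactly as the log does (hypothetical linkCertByHash helper):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash creates /etc/ssl/certs/<hash>.0 pointing at pemPath,
// using `openssl x509 -hash -noout` as in the commands above.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}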
	I1109 14:03:35.595355   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:03:35.598863   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:03:35.639394   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:03:35.682546   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:03:35.723686   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:03:35.769486   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:03:35.818163   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
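`openssl x509 -checkend 86400` asks whether a certificate will still be valid 24 hours from now; the equivalent check in Go with crypto/x509, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid
// `window` from now, the Go counterpart of `openssl x509 -checkend`.
func validFor(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}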
	I1109 14:03:35.873301   55908 kubeadm.go:401] StartCluster: {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:35.873423   55908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:03:35.873481   55908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:03:35.949725   55908 cri.go:89] found id: "947390d8997ffb89bea0e3c1e1bca5c1f8dd53d457d88db5aafd7664dbcb65b2"
	I1109 14:03:35.949794   55908 cri.go:89] found id: "c0ba74e816e1338d86f2f29c211b83c172784bbf106dba7bae518b2ee0201a4e"
	I1109 14:03:35.949821   55908 cri.go:89] found id: "785a023345fda66c98e73a27cd2aa79f3beb28f1d9847ff2264dd21ee91db42a"
	I1109 14:03:35.949838   55908 cri.go:89] found id: ""
	I1109 14:03:35.949915   55908 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:03:35.976461   55908 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:03:35Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:03:35.976622   55908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:03:35.995533   55908 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:03:35.995601   55908 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:03:35.995698   55908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:03:36.007080   55908 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:36.007609   55908 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-423884" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:36.007785   55908 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "ha-423884" cluster setting kubeconfig missing "ha-423884" context setting]
	I1109 14:03:36.008206   55908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.008996   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
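The client config dumped above is essentially a rest.Config pointing at the API server with the profile's client certificate. Building a working clientset from the same three files takes a few lines of client-go (a sketch, not minikube's kapi helper):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and file paths are the ones shown in the rest.Config dump above.
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}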
	I1109 14:03:36.009887   55908 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 14:03:36.009995   55908 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 14:03:36.010046   55908 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 14:03:36.010070   55908 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 14:03:36.009972   55908 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1109 14:03:36.010189   55908 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 14:03:36.010607   55908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:03:36.028288   55908 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1109 14:03:36.028364   55908 kubeadm.go:602] duration metric: took 32.744336ms to restartPrimaryControlPlane
	I1109 14:03:36.028386   55908 kubeadm.go:403] duration metric: took 155.094636ms to StartCluster
	I1109 14:03:36.028414   55908 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.028527   55908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:36.029250   55908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.029535   55908 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:03:36.029589   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:03:36.029633   55908 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:03:36.030494   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:36.035208   55908 out.go:179] * Enabled addons: 
	I1109 14:03:36.040262   55908 addons.go:515] duration metric: took 10.631239ms for enable addons: enabled=[]
	I1109 14:03:36.040364   55908 start.go:247] waiting for cluster config update ...
	I1109 14:03:36.040385   55908 start.go:256] writing updated cluster config ...
	I1109 14:03:36.043855   55908 out.go:203] 
	I1109 14:03:36.047167   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:36.047362   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.050885   55908 out.go:179] * Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	I1109 14:03:36.053842   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:03:36.056999   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:03:36.060038   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:03:36.060318   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:36.060344   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:03:36.060467   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:03:36.060496   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:03:36.060681   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.087960   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:03:36.087980   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:03:36.087991   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:03:36.088015   55908 start.go:360] acquireMachinesLock for ha-423884-m02: {Name:mkc465d60ac134a0502b48f535d5c2db44f7f07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:03:36.088071   55908 start.go:364] duration metric: took 40.263µs to acquireMachinesLock for "ha-423884-m02"
	I1109 14:03:36.088090   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:03:36.088095   55908 fix.go:54] fixHost starting: m02
	I1109 14:03:36.088348   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:36.119614   55908 fix.go:112] recreateIfNeeded on ha-423884-m02: state=Stopped err=<nil>
	W1109 14:03:36.119639   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:03:36.123884   55908 out.go:252] * Restarting existing docker container for "ha-423884-m02" ...
	I1109 14:03:36.123973   55908 cli_runner.go:164] Run: docker start ha-423884-m02
	I1109 14:03:36.530699   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:36.559612   55908 kic.go:430] container "ha-423884-m02" state is running.
	I1109 14:03:36.560004   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:36.586384   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.586624   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:03:36.586695   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:36.615730   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:36.616048   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:36.616058   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:03:36.616804   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49240->127.0.0.1:32823: read: connection reset by peer
	I1109 14:03:39.844217   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 14:03:39.844255   55908 ubuntu.go:182] provisioning hostname "ha-423884-m02"
	I1109 14:03:39.844325   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:39.868660   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:39.868984   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:39.869001   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m02 && echo "ha-423884-m02" | sudo tee /etc/hostname
	I1109 14:03:40.093355   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 14:03:40.093437   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.121586   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:40.121898   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:40.121920   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:03:40.328493   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
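Every "About to run SSH command" step above goes over the forwarded SSH port (127.0.0.1:32823 for this node) using the machine's id_rsa key. A bare-bones sketch with golang.org/x/crypto/ssh; host-key checking is disabled purely for brevity:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes one command on the node, the way each
// provisioning step above is carried out.
func runOverSSH(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node, not for production
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:32823", os.ExpandEnv("$HOME/.minikube/machines/ha-423884-m02/id_rsa"), "hostname")
	fmt.Println(out, err)
}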
	I1109 14:03:40.328522   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:03:40.328538   55908 ubuntu.go:190] setting up certificates
	I1109 14:03:40.328548   55908 provision.go:84] configureAuth start
	I1109 14:03:40.328618   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:40.372055   55908 provision.go:143] copyHostCerts
	I1109 14:03:40.372096   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:40.372169   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:03:40.372176   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:40.372257   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:03:40.372331   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:40.372347   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:03:40.372352   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:40.372377   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:03:40.372418   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:40.372433   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:03:40.372437   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:40.372461   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:03:40.372508   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m02 san=[127.0.0.1 192.168.49.3 ha-423884-m02 localhost minikube]
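provision.go:117 issues a server certificate signed by the machine CA with the SAN list shown (loopback, the node IP, the hostname, localhost, minikube). A compressed, self-signed sketch of producing a cert with that SAN set via crypto/x509; the separate CA-signing step is omitted for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-423884-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
		DNSNames:    []string{"ha-423884-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}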
	I1109 14:03:40.460419   55908 provision.go:177] copyRemoteCerts
	I1109 14:03:40.460536   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:03:40.460611   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.505492   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:40.630054   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:03:40.630110   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:03:40.653044   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:03:40.653106   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:03:40.683285   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:03:40.683343   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:03:40.713212   55908 provision.go:87] duration metric: took 384.650953ms to configureAuth
	I1109 14:03:40.713278   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:03:40.713537   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:40.713674   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.745458   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:40.745765   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:40.745786   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:03:41.160286   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:03:41.160309   55908 machine.go:97] duration metric: took 4.573667407s to provisionDockerMachine
	I1109 14:03:41.160321   55908 start.go:293] postStartSetup for "ha-423884-m02" (driver="docker")
	I1109 14:03:41.160332   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:03:41.160396   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:03:41.160449   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.178991   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.284963   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:03:41.288725   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:03:41.288763   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:03:41.288776   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:03:41.288833   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:03:41.288922   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:03:41.288929   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:03:41.289033   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:03:41.297714   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:41.316091   55908 start.go:296] duration metric: took 155.749725ms for postStartSetup
	I1109 14:03:41.316183   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:41.316251   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.332754   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.441566   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:03:41.446853   55908 fix.go:56] duration metric: took 5.358725913s for fixHost
	I1109 14:03:41.446878   55908 start.go:83] releasing machines lock for "ha-423884-m02", held for 5.358799177s
	I1109 14:03:41.446969   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:41.471189   55908 out.go:179] * Found network options:
	I1109 14:03:41.474105   55908 out.go:179]   - NO_PROXY=192.168.49.2
	W1109 14:03:41.477016   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:03:41.477060   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:03:41.477139   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:03:41.477182   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.477214   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:03:41.477268   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.498901   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.500358   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.696694   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:03:41.701371   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:03:41.701516   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:03:41.709683   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:03:41.709721   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:03:41.709755   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:03:41.709825   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:03:41.725678   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:03:41.739787   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:03:41.739856   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:03:41.757143   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:03:41.771643   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:03:41.900022   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:03:42.105606   55908 docker.go:234] disabling docker service ...
	I1109 14:03:42.105681   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:03:42.144421   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:03:42.178839   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:03:42.468213   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:03:42.691726   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:03:42.709612   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:03:42.730882   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:03:42.730946   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.740089   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:03:42.740148   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.750087   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.759038   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.773257   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:03:42.782648   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.800890   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.812622   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.829326   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:03:42.846516   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:03:42.860429   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:43.078130   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
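The sed edits above pin the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. The same two substitutions in Go, as a sketch (hypothetical rewriteCrioConf helper):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf applies the two substitutions the sed commands above make:
// pin the pause image and force the cgroup manager.
func rewriteCrioConf(path, pauseImage, cgroupMgr string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = %q`, cgroupMgr)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "cgroupfs")
	fmt.Println(err)
}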
	I1109 14:03:43.300172   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:03:43.300292   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:03:43.304336   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:03:43.304441   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:03:43.308290   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:03:43.334041   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:03:43.334158   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:43.366433   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:43.403997   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:03:43.406881   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:03:43.409947   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:03:43.426148   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:03:43.430019   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:43.439859   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:03:43.440179   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:43.440497   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:43.458429   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:03:43.458717   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.3
	I1109 14:03:43.458732   55908 certs.go:195] generating shared ca certs ...
	I1109 14:03:43.458747   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:43.458858   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:03:43.458906   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:03:43.458917   55908 certs.go:257] generating profile certs ...
	I1109 14:03:43.458991   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:03:43.459044   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.75d82079
	I1109 14:03:43.459087   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:03:43.459098   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:03:43.459110   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:03:43.459125   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:03:43.459143   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:03:43.459162   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:03:43.459178   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:03:43.459192   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:03:43.459209   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:03:43.459262   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:03:43.459293   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:03:43.459305   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:03:43.459331   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:03:43.459355   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:03:43.459385   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:03:43.459432   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:43.459462   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.459482   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:03:43.459498   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:03:43.459553   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:43.476791   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:43.576150   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 14:03:43.579947   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 14:03:43.588442   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 14:03:43.591845   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 14:03:43.600302   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 14:03:43.603828   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 14:03:43.612657   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 14:03:43.616127   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 14:03:43.624209   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 14:03:43.627692   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 14:03:43.635688   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 14:03:43.639181   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 14:03:43.647210   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:03:43.665935   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:03:43.683098   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:03:43.701792   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:03:43.720535   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:03:43.738207   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:03:43.756027   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:03:43.774278   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:03:43.792937   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:03:43.811113   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:03:43.829133   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:03:43.847536   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 14:03:43.860908   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 14:03:43.873289   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 14:03:43.886865   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 14:03:43.900616   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 14:03:43.913948   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 14:03:43.927015   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 14:03:43.939523   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:03:43.945583   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:03:43.954590   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.958760   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.958867   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.999953   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:03:44.007895   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:03:44.020206   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.024532   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.024619   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.068208   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:03:44.079840   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:03:44.089486   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.094109   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.094227   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.137949   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:03:44.146324   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:03:44.150369   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:03:44.191825   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:03:44.232925   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:03:44.273939   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:03:44.314652   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:03:44.356028   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
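Each of the openssl x509 -checkend 86400 runs above exits non-zero if the named certificate expires within 24 hours, which is how minikube decides whether an existing cert can be reused or has to be regenerated. Below is a minimal Go sketch of the same 24-hour check; it is not taken from the minikube source, and the cert path is a placeholder standing in for the files under /var/lib/minikube/certs probed above.

    // Sketch only: mirrors "openssl x509 -checkend 86400" in Go.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Placeholder path; the log checks certs on the node itself.
        certPath := "apiserver-kubelet-client.crt"
        data, err := os.ReadFile(certPath)
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h; it would be regenerated")
            os.Exit(1)
        }
        fmt.Println("certificate valid for at least another 24h")
    }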
	I1109 14:03:44.407731   55908 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1109 14:03:44.407917   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:03:44.407958   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:03:44.408031   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:03:44.419991   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:44.420052   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
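The manifest above is the kube-vip static pod: it is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, so the kubelet runs kube-vip advertising the HA VIP 192.168.49.254 on port 8443 with leader election through the plndr-cp-lock lease. The following is only a rough sketch of how such a manifest could be rendered from a template in Go; the type, field, and template names are made up, and just a handful of the env entries are reproduced (the real generation lives in minikube's kube-vip.go).

    package main

    import (
        "os"
        "text/template"
    )

    // vipParams is a hypothetical stand-in for the values filled in.
    type vipParams struct {
        VIP       string
        Port      int
        Interface string
        Image     string
    }

    // manifestTmpl sketches only the fields that vary per cluster.
    const manifestTmpl = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: port
          value: "{{.Port}}"
        - name: vip_interface
          value: {{.Interface}}
        - name: address
          value: {{.VIP}}
      hostNetwork: true
    `

    func main() {
        p := vipParams{VIP: "192.168.49.254", Port: 8443, Interface: "eth0", Image: "ghcr.io/kube-vip/kube-vip:v1.0.1"}
        t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
        if err := t.Execute(os.Stdout, p); err != nil {
            panic(err)
        }
    }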
	I1109 14:03:44.420129   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:03:44.427945   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:03:44.428013   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 14:03:44.435476   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:03:44.448591   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:03:44.461928   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:03:44.475231   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:03:44.478933   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:44.488867   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:44.623612   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:44.638897   55908 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:03:44.639336   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:44.643324   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:03:44.646391   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:44.766731   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:44.781836   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:03:44.781971   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:03:44.782234   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m02" to be "Ready" ...
	W1109 14:03:54.783441   55908 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	I1109 14:03:58.293061   55908 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:04:08.294056   55908 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.49.1:36070->192.168.49.2:8443: read: connection reset by peer
	I1109 14:04:10.224067   55908 node_ready.go:49] node "ha-423884-m02" is "Ready"
	I1109 14:04:10.224094   55908 node_ready.go:38] duration metric: took 25.441822993s for node "ha-423884-m02" to be "Ready" ...
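node_ready.go keeps re-reading the node object until its Ready condition reports True, tolerating transient failures such as the TLS handshake timeouts above. A short client-go sketch of that wait follows; it assumes an on-disk kubeconfig purely for simplicity, whereas minikube builds the rest.Config in-process from the profile's client certificates.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-423884-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for node Ready")
    }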
	I1109 14:04:10.224107   55908 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:04:10.224169   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:10.237071   55908 api_server.go:72] duration metric: took 25.598086143s to wait for apiserver process to appear ...
	I1109 14:04:10.237093   55908 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:04:10.237122   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:10.273674   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:10.273706   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:10.737933   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:10.747401   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:10.747476   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:11.238081   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:11.253573   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:11.253663   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:11.737248   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:11.745671   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:11.745753   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:12.237288   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:12.246058   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 14:04:12.247325   55908 api_server.go:141] control plane version: v1.34.1
	I1109 14:04:12.247378   55908 api_server.go:131] duration metric: took 2.0102771s to wait for apiserver health ...
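api_server.go probes /healthz roughly every 500ms and treats 500 responses as "not yet ready": once the failing post-start hooks above (rbac/bootstrap-roles, bootstrap-controller, scheduling/bootstrap-system-priority-classes) complete, the endpoint returns 200 and the wait ends, here after about 2 seconds. A small Go sketch of that polling loop follows; the insecure TLS client is only for brevity, since the real check trusts the minikube CA and presents client certificates.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is for the sketch only; not what minikube does.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body))
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            } else {
                fmt.Println("healthz request failed, retrying:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }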
	I1109 14:04:12.247399   55908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:04:12.255293   55908 system_pods.go:59] 26 kube-system pods found
	I1109 14:04:12.255379   55908 system_pods.go:61] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running
	I1109 14:04:12.255399   55908 system_pods.go:61] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running
	I1109 14:04:12.255418   55908 system_pods.go:61] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:12.255451   55908 system_pods.go:61] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:12.255475   55908 system_pods.go:61] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:12.255490   55908 system_pods.go:61] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:12.255507   55908 system_pods.go:61] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:12.255525   55908 system_pods.go:61] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:12.255556   55908 system_pods.go:61] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:12.255578   55908 system_pods.go:61] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:12.255596   55908 system_pods.go:61] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:12.255613   55908 system_pods.go:61] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:12.255631   55908 system_pods.go:61] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:12.255657   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:12.255679   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:12.255698   55908 system_pods.go:61] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:12.255716   55908 system_pods.go:61] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:12.255733   55908 system_pods.go:61] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:12.255760   55908 system_pods.go:61] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:12.255785   55908 system_pods.go:61] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:12.255802   55908 system_pods.go:61] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:12.255819   55908 system_pods.go:61] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:12.255834   55908 system_pods.go:61] "kube-vip-ha-423884" [8470dcc0-6c4f-4241-ad4e-8b896f6712b0] Running
	I1109 14:04:12.255904   55908 system_pods.go:61] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:12.255931   55908 system_pods.go:61] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:12.255949   55908 system_pods.go:61] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:12.255967   55908 system_pods.go:74] duration metric: took 8.549678ms to wait for pod list to return data ...
	I1109 14:04:12.255987   55908 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:04:12.259644   55908 default_sa.go:45] found service account: "default"
	I1109 14:04:12.259701   55908 default_sa.go:55] duration metric: took 3.685783ms for default service account to be created ...
	I1109 14:04:12.259723   55908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:04:12.265757   55908 system_pods.go:86] 26 kube-system pods found
	I1109 14:04:12.265830   55908 system_pods.go:89] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running
	I1109 14:04:12.265849   55908 system_pods.go:89] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running
	I1109 14:04:12.265871   55908 system_pods.go:89] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:12.265906   55908 system_pods.go:89] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:12.265928   55908 system_pods.go:89] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:12.265945   55908 system_pods.go:89] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:12.265961   55908 system_pods.go:89] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:12.265977   55908 system_pods.go:89] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:12.266004   55908 system_pods.go:89] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:12.266025   55908 system_pods.go:89] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:12.266042   55908 system_pods.go:89] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:12.266059   55908 system_pods.go:89] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:12.266077   55908 system_pods.go:89] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:12.266107   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:12.266238   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:12.266258   55908 system_pods.go:89] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:12.266274   55908 system_pods.go:89] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:12.266290   55908 system_pods.go:89] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:12.266322   55908 system_pods.go:89] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:12.266345   55908 system_pods.go:89] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:12.266364   55908 system_pods.go:89] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:12.266382   55908 system_pods.go:89] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:12.266400   55908 system_pods.go:89] "kube-vip-ha-423884" [8470dcc0-6c4f-4241-ad4e-8b896f6712b0] Running
	I1109 14:04:12.266427   55908 system_pods.go:89] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:12.266450   55908 system_pods.go:89] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:12.266468   55908 system_pods.go:89] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:12.266489   55908 system_pods.go:126] duration metric: took 6.747337ms to wait for k8s-apps to be running ...
	I1109 14:04:12.266510   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:12.266588   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:12.282135   55908 system_svc.go:56] duration metric: took 15.616371ms WaitForService to wait for kubelet
	I1109 14:04:12.282232   55908 kubeadm.go:587] duration metric: took 27.643251935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:12.282264   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:12.287797   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.287962   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.287995   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288016   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288036   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288054   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288080   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288104   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288124   55908 node_conditions.go:105] duration metric: took 5.843459ms to run NodePressure ...
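The NodePressure verification reads .status.capacity from each of the four nodes, hence the four cpu/ephemeral-storage pairs above. A brief client-go sketch that lists the same figures; the kubeconfig path is illustrative only and not how minikube wires up its client.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Prints e.g. "ha-423884: cpu=2 ephemeral-storage=203034800Ki"
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name,
                n.Status.Capacity.Cpu().String(),
                n.Status.Capacity.StorageEphemeral().String())
        }
    }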
	I1109 14:04:12.288147   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:12.288194   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:12.292016   55908 out.go:203] 
	I1109 14:04:12.295240   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:12.295416   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.298693   55908 out.go:179] * Starting "ha-423884-m03" control-plane node in "ha-423884" cluster
	I1109 14:04:12.302221   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:04:12.305225   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:04:12.307950   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:04:12.307975   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:04:12.308093   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:04:12.308103   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:04:12.308245   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.308454   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:04:12.335753   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:04:12.335772   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:04:12.335783   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:04:12.335806   55908 start.go:360] acquireMachinesLock for ha-423884-m03: {Name:mk2c1f49120f6acdbb0b7c106d84b578b982c1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:04:12.335852   55908 start.go:364] duration metric: took 32.608µs to acquireMachinesLock for "ha-423884-m03"
	I1109 14:04:12.335906   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:04:12.335913   55908 fix.go:54] fixHost starting: m03
	I1109 14:04:12.336176   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:04:12.360018   55908 fix.go:112] recreateIfNeeded on ha-423884-m03: state=Stopped err=<nil>
	W1109 14:04:12.360050   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:04:12.363431   55908 out.go:252] * Restarting existing docker container for "ha-423884-m03" ...
	I1109 14:04:12.363592   55908 cli_runner.go:164] Run: docker start ha-423884-m03
	I1109 14:04:12.653356   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:04:12.683958   55908 kic.go:430] container "ha-423884-m03" state is running.
	I1109 14:04:12.684306   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:12.727840   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.728107   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:04:12.728163   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:12.759896   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:12.760195   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:12.760204   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:04:12.761068   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:04:16.033281   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m03
	
	I1109 14:04:16.033354   55908 ubuntu.go:182] provisioning hostname "ha-423884-m03"
	I1109 14:04:16.033448   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:16.074078   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:16.074389   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:16.074407   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m03 && echo "ha-423884-m03" | sudo tee /etc/hostname
	I1109 14:04:16.423110   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m03
	
	I1109 14:04:16.423192   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:16.456144   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:16.456500   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:16.456523   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:04:16.751298   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:04:16.751374   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:04:16.751397   55908 ubuntu.go:190] setting up certificates
	I1109 14:04:16.751407   55908 provision.go:84] configureAuth start
	I1109 14:04:16.751471   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:16.793487   55908 provision.go:143] copyHostCerts
	I1109 14:04:16.793536   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:16.793570   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:04:16.793586   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:16.793664   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:04:16.793744   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:16.793767   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:04:16.793774   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:16.793803   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:04:16.793848   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:16.793870   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:04:16.793874   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:16.793899   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:04:16.793952   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m03 san=[127.0.0.1 192.168.49.4 ha-423884-m03 localhost minikube]
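provision.go issues a per-machine server certificate signed by the local CA, with SANs covering 127.0.0.1, the node IP 192.168.49.4, the hostname ha-423884-m03, localhost, and minikube. The Go sketch below builds a certificate carrying the same SANs; it is self-signed for brevity where the real one is CA-signed, and the ECDSA key type is an assumption, not what minikube uses.

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-423884-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // Same SAN set the log reports for ha-423884-m03.
            DNSNames:    []string{"ha-423884-m03", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
            panic(err)
        }
    }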
	I1109 14:04:17.244605   55908 provision.go:177] copyRemoteCerts
	I1109 14:04:17.244683   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:04:17.244730   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:17.267714   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:17.397341   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:04:17.397397   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:04:17.451209   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:04:17.451268   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:04:17.501897   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:04:17.501959   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:04:17.543399   55908 provision.go:87] duration metric: took 791.974444ms to configureAuth
	I1109 14:04:17.543429   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:04:17.543658   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:17.543760   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:17.578118   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:17.578425   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:17.578447   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:04:18.006743   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:04:18.006766   55908 machine.go:97] duration metric: took 5.278648591s to provisionDockerMachine
	I1109 14:04:18.006777   55908 start.go:293] postStartSetup for "ha-423884-m03" (driver="docker")
	I1109 14:04:18.006788   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:04:18.006849   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:04:18.006908   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.028378   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.136392   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:04:18.139676   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:04:18.139706   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:04:18.139718   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:04:18.139772   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:04:18.139877   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:04:18.139916   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:04:18.140203   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:04:18.151607   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:18.170641   55908 start.go:296] duration metric: took 163.846632ms for postStartSetup
	I1109 14:04:18.170734   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:04:18.170783   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.190645   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.303725   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:04:18.315157   55908 fix.go:56] duration metric: took 5.979236955s for fixHost
	I1109 14:04:18.315228   55908 start.go:83] releasing machines lock for "ha-423884-m03", held for 5.979367853s
	I1109 14:04:18.315337   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:18.346232   55908 out.go:179] * Found network options:
	I1109 14:04:18.349488   55908 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1109 14:04:18.352634   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352664   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352686   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352696   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:04:18.352763   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:04:18.352815   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.353042   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:04:18.353099   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.407037   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.416133   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.761655   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:04:18.827322   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:04:18.827443   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:04:18.846068   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:04:18.846140   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:04:18.846187   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:04:18.846266   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:04:18.869418   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:04:18.889860   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:04:18.889997   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:04:18.919381   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:04:18.942214   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:04:19.209339   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:04:19.469248   55908 docker.go:234] disabling docker service ...
	I1109 14:04:19.469315   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:04:19.487357   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:04:19.508816   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:04:19.750896   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:04:19.978351   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:04:20.002094   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:04:20.029962   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:04:20.030038   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.046014   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:04:20.046086   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.061773   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.083454   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.096347   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:04:20.114097   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.126722   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.143159   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.160109   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:04:20.177582   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:04:20.196091   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:20.468433   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
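The run above is minikube's standard CRI-O preparation on a restarted node: the drop-in at /etc/crio/crio.conf.d/02-crio.conf is rewritten in place with sed (pause image, cgroup manager, unprivileged-port sysctl), IPv4 forwarding is enabled, and CRI-O is restarted. A minimal Go sketch of the same edits, assuming direct local root access instead of minikube's ssh_runner, could look like this:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell step as root and surfaces its combined output on failure.
func run(cmd string) error {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v: %s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// pin the pause image and cgroup manager, as in the log above
		fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' %s`, conf),
		fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		// make sure IPv4 forwarding is on, as the log does
		"echo 1 > /proc/sys/net/ipv4/ip_forward",
		// pick up the drop-in and restart the runtime
		"systemctl daemon-reload",
		"systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println(err)
			return
		}
	}
}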
	I1109 14:04:21.283004   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:04:21.283084   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:04:21.287304   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:04:21.287372   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:04:21.291538   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:04:21.328386   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:04:21.328481   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:21.361417   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:21.451954   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:04:21.455954   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:04:21.459224   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1109 14:04:21.462952   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:04:21.484807   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:04:21.489960   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:21.506775   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:04:21.507015   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:21.507301   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:04:21.526101   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:04:21.526377   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.4
	I1109 14:04:21.526391   55908 certs.go:195] generating shared ca certs ...
	I1109 14:04:21.526407   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:04:21.526515   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:04:21.526559   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:04:21.526572   55908 certs.go:257] generating profile certs ...
	I1109 14:04:21.526658   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:04:21.526726   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.7ffb4171
	I1109 14:04:21.526767   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:04:21.526781   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:04:21.526793   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:04:21.526808   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:04:21.526826   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:04:21.526836   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:04:21.526848   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:04:21.526910   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:04:21.526925   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:04:21.526982   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:04:21.527018   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:04:21.527028   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:04:21.527056   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:04:21.527080   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:04:21.527107   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:04:21.527154   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:21.527185   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:04:21.527200   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:21.527211   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:04:21.527271   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:04:21.551818   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:04:21.676202   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 14:04:21.680212   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 14:04:21.691215   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 14:04:21.701694   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 14:04:21.714762   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 14:04:21.719210   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 14:04:21.729229   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 14:04:21.733219   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 14:04:21.742594   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 14:04:21.746326   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 14:04:21.755768   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 14:04:21.759436   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 14:04:21.771660   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:04:21.795312   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:04:21.815560   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:04:21.833662   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:04:21.852805   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:04:21.870267   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:04:21.889041   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:04:21.907386   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:04:21.925376   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:04:21.943214   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:04:21.961586   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:04:21.979793   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 14:04:21.993395   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 14:04:22.006684   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 14:04:22.033388   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 14:04:22.052052   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 14:04:22.068060   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 14:04:22.086207   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 14:04:22.104940   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:04:22.112046   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:04:22.122102   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.125980   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.126092   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.167702   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:04:22.176107   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:04:22.184759   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.189529   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.189649   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.231896   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:04:22.240788   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:04:22.250648   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.254774   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.254890   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.295743   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
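The openssl x509 -hash calls and ln -fs commands above implement the standard OpenSSL trust-store layout: each CA PEM under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem, for example), so OpenSSL-based clients can find it while building chains. A small sketch of that step, shelling out to openssl for the hash and using the minikubeCA path from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	// equivalent to: test -L <link> || ln -fs <pem> <link>  (requires root)
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pem, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}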
	I1109 14:04:22.303694   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:04:22.308400   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:04:22.361240   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:04:22.402093   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:04:22.444367   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:04:22.486212   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:04:22.528227   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
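Each -checkend 86400 run above asks whether the certificate stays valid for at least another 24 hours; a non-zero exit would force regeneration before the node joins. The equivalent check in Go's crypto/x509, with one of the log's paths used as an illustration:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor reports whether the PEM certificate at path is still valid d from now.
func validFor(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(ok, err)
}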
	I1109 14:04:22.571111   55908 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1109 14:04:22.571227   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:04:22.571257   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:04:22.571311   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:04:22.583651   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:04:22.583707   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
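The generated kube-vip static pod above omits IPVS-based control-plane load balancing because `lsmod | grep ip_vs` exited non-zero on this node. A hedged sketch of that decision follows; the lb_enable switch is kube-vip's documented environment variable and is assumed here rather than taken from this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	extraEnv := map[string]string{}
	// mirror the log's probe: only enable load balancing when ip_vs modules are loaded
	if err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run(); err != nil {
		fmt.Println("ip_vs modules not available; generating kube-vip config without control-plane load balancing")
	} else {
		extraEnv["lb_enable"] = "true" // assumed kube-vip switch, not shown in this log
	}
	fmt.Println("extra kube-vip env:", extraEnv)
}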
	I1109 14:04:22.583783   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:04:22.592357   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:04:22.592434   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 14:04:22.602564   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:04:22.615684   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:04:22.634261   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:04:22.648965   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:04:22.652918   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
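The two commands above pin control-plane.minikube.internal in /etc/hosts idempotently: filter out any existing line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts. The same rewrite, sketched in Go (run as root; the IP and hostname come from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites /etc/hosts so that exactly one line maps name to ip.
func pinHost(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(pinHost("192.168.49.254", "control-plane.minikube.internal"))
}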
	I1109 14:04:22.663308   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:22.796103   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:22.812101   55908 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:04:22.812586   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:22.817295   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:04:22.820274   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:22.956399   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:22.970086   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:04:22.970158   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:04:22.970389   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m03" to be "Ready" ...
	I1109 14:04:22.973665   55908 node_ready.go:49] node "ha-423884-m03" is "Ready"
	I1109 14:04:22.973696   55908 node_ready.go:38] duration metric: took 3.289742ms for node "ha-423884-m03" to be "Ready" ...
	I1109 14:04:22.973708   55908 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:04:22.973776   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:23.474233   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:23.974449   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:24.473927   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:24.973967   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:25.474635   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:25.973916   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:26.474480   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:26.974653   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:27.474731   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:27.974238   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:28.474498   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:28.973919   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:29.474517   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:29.974713   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:30.474585   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:30.974741   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:31.473916   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:31.974806   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:32.474537   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:32.973899   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:33.474884   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:33.974179   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:34.473908   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:34.973922   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:35.474186   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:35.974351   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:36.474756   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:36.973943   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:37.474873   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:37.974832   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:38.474095   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:38.486973   55908 api_server.go:72] duration metric: took 15.674824664s to wait for apiserver process to appear ...
	I1109 14:04:38.486994   55908 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:04:38.487013   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:38.496492   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 14:04:38.497757   55908 api_server.go:141] control plane version: v1.34.1
	I1109 14:04:38.497778   55908 api_server.go:131] duration metric: took 10.777406ms to wait for apiserver health ...
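The healthz wait above simply polls https://192.168.49.2:8443/healthz until it answers 200 with body "ok". A compact sketch of one probe follows; it skips TLS verification for brevity, which is an assumption for the sketch only, since the real client uses the cluster CA shown in the kapi.go config earlier:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}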
	I1109 14:04:38.497787   55908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:04:38.505258   55908 system_pods.go:59] 26 kube-system pods found
	I1109 14:04:38.505350   55908 system_pods.go:61] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.505374   55908 system_pods.go:61] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.505408   55908 system_pods.go:61] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:38.505432   55908 system_pods.go:61] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:38.505449   55908 system_pods.go:61] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:38.505466   55908 system_pods.go:61] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:38.505484   55908 system_pods.go:61] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:38.505510   55908 system_pods.go:61] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:38.505536   55908 system_pods.go:61] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:38.505555   55908 system_pods.go:61] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:38.505572   55908 system_pods.go:61] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:38.505590   55908 system_pods.go:61] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:38.505618   55908 system_pods.go:61] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:38.505641   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:38.505659   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:38.505675   55908 system_pods.go:61] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:38.505694   55908 system_pods.go:61] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:38.505721   55908 system_pods.go:61] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:38.505743   55908 system_pods.go:61] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:38.505761   55908 system_pods.go:61] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:38.505778   55908 system_pods.go:61] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:38.505796   55908 system_pods.go:61] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:38.505824   55908 system_pods.go:61] "kube-vip-ha-423884" [b043421c-6408-4df1-87d9-bc0d12fef736] Running
	I1109 14:04:38.505850   55908 system_pods.go:61] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:38.505867   55908 system_pods.go:61] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:38.505886   55908 system_pods.go:61] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:38.505905   55908 system_pods.go:74] duration metric: took 8.112367ms to wait for pod list to return data ...
	I1109 14:04:38.505935   55908 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:04:38.509739   55908 default_sa.go:45] found service account: "default"
	I1109 14:04:38.509805   55908 default_sa.go:55] duration metric: took 3.846441ms for default service account to be created ...
	I1109 14:04:38.509829   55908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:04:38.517291   55908 system_pods.go:86] 26 kube-system pods found
	I1109 14:04:38.517382   55908 system_pods.go:89] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.517407   55908 system_pods.go:89] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.517444   55908 system_pods.go:89] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:38.517467   55908 system_pods.go:89] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:38.517484   55908 system_pods.go:89] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:38.517500   55908 system_pods.go:89] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:38.517518   55908 system_pods.go:89] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:38.517545   55908 system_pods.go:89] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:38.517568   55908 system_pods.go:89] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:38.517586   55908 system_pods.go:89] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:38.517602   55908 system_pods.go:89] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:38.517620   55908 system_pods.go:89] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:38.517648   55908 system_pods.go:89] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:38.517670   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:38.517688   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:38.517705   55908 system_pods.go:89] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:38.517722   55908 system_pods.go:89] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:38.517750   55908 system_pods.go:89] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:38.517773   55908 system_pods.go:89] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:38.517794   55908 system_pods.go:89] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:38.517812   55908 system_pods.go:89] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:38.517830   55908 system_pods.go:89] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:38.517856   55908 system_pods.go:89] "kube-vip-ha-423884" [b043421c-6408-4df1-87d9-bc0d12fef736] Running
	I1109 14:04:38.517877   55908 system_pods.go:89] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:38.517894   55908 system_pods.go:89] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:38.517911   55908 system_pods.go:89] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:38.517933   55908 system_pods.go:126] duration metric: took 8.084994ms to wait for k8s-apps to be running ...
	I1109 14:04:38.517962   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:38.518068   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:38.532879   55908 system_svc.go:56] duration metric: took 14.908297ms WaitForService to wait for kubelet
	I1109 14:04:38.532917   55908 kubeadm.go:587] duration metric: took 15.720774062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:38.532935   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:38.536579   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536610   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536621   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536625   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536629   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536633   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536636   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536648   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536656   55908 node_conditions.go:105] duration metric: took 3.715265ms to run NodePressure ...
	I1109 14:04:38.536669   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:38.536695   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:38.540432   55908 out.go:203] 
	I1109 14:04:38.543707   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:38.543833   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:38.547314   55908 out.go:179] * Starting "ha-423884-m04" worker node in "ha-423884" cluster
	I1109 14:04:38.550154   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:04:38.553075   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:04:38.555918   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:04:38.555945   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:04:38.555984   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:04:38.556052   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:04:38.556067   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:04:38.556232   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:38.596080   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:04:38.596104   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:04:38.596117   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:04:38.596140   55908 start.go:360] acquireMachinesLock for ha-423884-m04: {Name:mk8ea327a8bd5498886fa5c18402495ffce70373 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:04:38.596197   55908 start.go:364] duration metric: took 36.833µs to acquireMachinesLock for "ha-423884-m04"
	I1109 14:04:38.596221   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:04:38.596226   55908 fix.go:54] fixHost starting: m04
	I1109 14:04:38.596505   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:04:38.628055   55908 fix.go:112] recreateIfNeeded on ha-423884-m04: state=Stopped err=<nil>
	W1109 14:04:38.628083   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:04:38.631296   55908 out.go:252] * Restarting existing docker container for "ha-423884-m04" ...
	I1109 14:04:38.631384   55908 cli_runner.go:164] Run: docker start ha-423884-m04
	I1109 14:04:38.994029   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:04:39.024143   55908 kic.go:430] container "ha-423884-m04" state is running.
	I1109 14:04:39.024645   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:39.049753   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:39.049997   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:04:39.050055   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:39.086245   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:39.086555   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:39.086564   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:04:39.087311   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54962->127.0.0.1:32833: read: connection reset by peer
	I1109 14:04:42.305377   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m04
	
	I1109 14:04:42.305403   55908 ubuntu.go:182] provisioning hostname "ha-423884-m04"
	I1109 14:04:42.305544   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:42.345625   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:42.345948   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:42.345975   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m04 && echo "ha-423884-m04" | sudo tee /etc/hostname
	I1109 14:04:42.540380   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m04
	
	I1109 14:04:42.540467   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:42.568082   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:42.568508   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:42.568528   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:04:42.740938   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:04:42.740964   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:04:42.740987   55908 ubuntu.go:190] setting up certificates
	I1109 14:04:42.740999   55908 provision.go:84] configureAuth start
	I1109 14:04:42.741056   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:42.758596   55908 provision.go:143] copyHostCerts
	I1109 14:04:42.758635   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:42.758666   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:04:42.758673   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:42.758748   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:04:42.758825   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:42.758841   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:04:42.758845   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:42.758872   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:04:42.758947   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:42.758966   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:04:42.758970   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:42.758992   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:04:42.759035   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m04 san=[127.0.0.1 192.168.49.5 ha-423884-m04 localhost minikube]
	I1109 14:04:43.620778   55908 provision.go:177] copyRemoteCerts
	I1109 14:04:43.620850   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:04:43.620891   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:43.638135   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:43.746715   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:04:43.746778   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:04:43.783559   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:04:43.783620   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:04:43.821821   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:04:43.821884   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:04:43.853243   55908 provision.go:87] duration metric: took 1.112229927s to configureAuth
	I1109 14:04:43.853316   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:04:43.853606   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:43.853756   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:43.895433   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:43.895732   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:43.895746   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:04:44.332263   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:04:44.332289   55908 machine.go:97] duration metric: took 5.282283014s to provisionDockerMachine
	I1109 14:04:44.332300   55908 start.go:293] postStartSetup for "ha-423884-m04" (driver="docker")
	I1109 14:04:44.332310   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:04:44.332371   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:04:44.332415   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.353937   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.464143   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:04:44.470188   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:04:44.470214   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:04:44.470225   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:04:44.470281   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:04:44.470354   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:04:44.470361   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:04:44.470470   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:04:44.479795   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:44.529226   55908 start.go:296] duration metric: took 196.901694ms for postStartSetup
	I1109 14:04:44.529386   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:04:44.529460   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.554649   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.673604   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:04:44.680762   55908 fix.go:56] duration metric: took 6.08452744s for fixHost
	I1109 14:04:44.680784   55908 start.go:83] releasing machines lock for "ha-423884-m04", held for 6.084574408s
	I1109 14:04:44.680867   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:44.721415   55908 out.go:179] * Found network options:
	I1109 14:04:44.724159   55908 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1109 14:04:44.726873   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726905   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726917   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726942   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726952   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726961   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:04:44.727033   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:04:44.727074   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:04:44.727134   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.727085   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.759201   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.763544   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:45.037350   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:04:45.135550   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:04:45.135658   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:04:45.148313   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:04:45.148341   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:04:45.148377   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:04:45.148433   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:04:45.185399   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:04:45.214772   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:04:45.214846   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:04:45.250953   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:04:45.287278   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:04:45.661062   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:04:45.935411   55908 docker.go:234] disabling docker service ...
	I1109 14:04:45.935486   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:04:45.952438   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:04:45.980819   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:04:46.226547   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:04:46.528888   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:04:46.569464   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:04:46.593467   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:04:46.593541   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.617190   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:04:46.617307   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.632140   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.655050   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.669679   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:04:46.703425   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.732454   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.748482   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.774220   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:04:46.794338   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:04:46.805580   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:47.010084   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:04:47.173577   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:04:47.173656   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:04:47.181540   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:04:47.181604   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:04:47.186006   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:04:47.222300   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:04:47.222379   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:47.253413   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:47.291652   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:04:47.294554   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:04:47.297616   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1109 14:04:47.301230   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1109 14:04:47.304267   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:04:47.343687   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:04:47.347710   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:47.360845   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:04:47.361083   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:47.361322   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:04:47.390238   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:04:47.390509   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.5
	I1109 14:04:47.390516   55908 certs.go:195] generating shared ca certs ...
	I1109 14:04:47.390534   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:04:47.390655   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:04:47.390695   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:04:47.390705   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:04:47.390717   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:04:47.390728   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:04:47.390739   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:04:47.390789   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:04:47.390815   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:04:47.390823   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:04:47.390848   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:04:47.390868   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:04:47.390889   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:04:47.390931   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:47.390957   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.390969   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.390980   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.390996   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:04:47.419171   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:04:47.458480   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:04:47.491840   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:04:47.515467   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:04:47.547694   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:04:47.571204   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:04:47.596967   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:04:47.604617   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:04:47.618704   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.623578   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.623648   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.684940   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:04:47.694950   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:04:47.704570   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.709468   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.709530   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.765768   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:04:47.777604   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:04:47.788177   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.793126   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.793191   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.845154   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:04:47.856386   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:04:47.861306   55908 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:04:47.861350   55908 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1109 14:04:47.861449   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:04:47.861522   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:04:47.870269   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:04:47.870337   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1109 14:04:47.880368   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:04:47.897846   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:04:47.917114   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:04:47.924685   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:47.936633   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:48.172177   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:48.203009   55908 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1109 14:04:48.203488   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:48.206078   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:04:48.209257   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:48.462006   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:48.478911   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:04:48.478989   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:04:48.479221   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m04" to be "Ready" ...
	I1109 14:04:48.482317   55908 node_ready.go:49] node "ha-423884-m04" is "Ready"
	I1109 14:04:48.482349   55908 node_ready.go:38] duration metric: took 3.109285ms for node "ha-423884-m04" to be "Ready" ...
	I1109 14:04:48.482363   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:48.482419   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:48.500348   55908 system_svc.go:56] duration metric: took 17.977329ms WaitForService to wait for kubelet
	I1109 14:04:48.500378   55908 kubeadm.go:587] duration metric: took 297.325981ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:48.500397   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:48.505686   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505725   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505737   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505742   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505745   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505750   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505754   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505758   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505763   55908 node_conditions.go:105] duration metric: took 5.360822ms to run NodePressure ...
	I1109 14:04:48.505778   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:48.505806   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:48.506138   55908 ssh_runner.go:195] Run: rm -f paused
	I1109 14:04:48.511449   55908 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:04:48.512086   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:04:48.531812   55908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wl6rt" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:04:50.538801   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	W1109 14:04:53.041776   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	W1109 14:04:55.540126   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	I1109 14:04:57.039850   55908 pod_ready.go:94] pod "coredns-66bc5c9577-wl6rt" is "Ready"
	I1109 14:04:57.039917   55908 pod_ready.go:86] duration metric: took 8.508070998s for pod "coredns-66bc5c9577-wl6rt" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.039928   55908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x2j4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.047591   55908 pod_ready.go:94] pod "coredns-66bc5c9577-x2j4c" is "Ready"
	I1109 14:04:57.047620   55908 pod_ready.go:86] duration metric: took 7.684548ms for pod "coredns-66bc5c9577-x2j4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.051339   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.057478   55908 pod_ready.go:94] pod "etcd-ha-423884" is "Ready"
	I1109 14:04:57.057507   55908 pod_ready.go:86] duration metric: took 6.138948ms for pod "etcd-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.057516   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.063675   55908 pod_ready.go:94] pod "etcd-ha-423884-m02" is "Ready"
	I1109 14:04:57.063703   55908 pod_ready.go:86] duration metric: took 6.180712ms for pod "etcd-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.063713   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.232913   55908 request.go:683] "Waited before sending request" delay="166.184726ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:04:57.235976   55908 pod_ready.go:94] pod "etcd-ha-423884-m03" is "Ready"
	I1109 14:04:57.236003   55908 pod_ready.go:86] duration metric: took 172.283157ms for pod "etcd-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.433310   55908 request.go:683] "Waited before sending request" delay="197.214303ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1109 14:04:57.437206   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.632527   55908 request.go:683] "Waited before sending request" delay="195.228871ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884"
	I1109 14:04:57.833084   55908 request.go:683] "Waited before sending request" delay="197.197966ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:04:57.836198   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884" is "Ready"
	I1109 14:04:57.836230   55908 pod_ready.go:86] duration metric: took 398.997813ms for pod "kube-apiserver-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.836239   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.032538   55908 request.go:683] "Waited before sending request" delay="196.215039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884-m02"
	I1109 14:04:58.232521   55908 request.go:683] "Waited before sending request" delay="195.230554ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:04:58.236341   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884-m02" is "Ready"
	I1109 14:04:58.236367   55908 pod_ready.go:86] duration metric: took 400.120914ms for pod "kube-apiserver-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.236376   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.433023   55908 request.go:683] "Waited before sending request" delay="196.538827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884-m03"
	I1109 14:04:58.632901   55908 request.go:683] "Waited before sending request" delay="196.260046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:04:58.636121   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884-m03" is "Ready"
	I1109 14:04:58.636150   55908 pod_ready.go:86] duration metric: took 399.76645ms for pod "kube-apiserver-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.832522   55908 request.go:683] "Waited before sending request" delay="196.25788ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1109 14:04:58.836640   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.033076   55908 request.go:683] "Waited before sending request" delay="196.288797ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884"
	I1109 14:04:59.233471   55908 request.go:683] "Waited before sending request" delay="197.170343ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:04:59.236562   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884" is "Ready"
	I1109 14:04:59.236586   55908 pod_ready.go:86] duration metric: took 399.915672ms for pod "kube-controller-manager-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.236595   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.432815   55908 request.go:683] "Waited before sending request" delay="196.151501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884-m02"
	I1109 14:04:59.633389   55908 request.go:683] "Waited before sending request" delay="197.339699ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:04:59.636611   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884-m02" is "Ready"
	I1109 14:04:59.636639   55908 pod_ready.go:86] duration metric: took 400.036716ms for pod "kube-controller-manager-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.636649   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.832944   55908 request.go:683] "Waited before sending request" delay="196.225586ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884-m03"
	I1109 14:05:00.032735   55908 request.go:683] "Waited before sending request" delay="196.153889ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:00.114688   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884-m03" is "Ready"
	I1109 14:05:00.114728   55908 pod_ready.go:86] duration metric: took 478.071803ms for pod "kube-controller-manager-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.242596   55908 request.go:683] "Waited before sending request" delay="127.725515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1109 14:05:00.298102   55908 pod_ready.go:83] waiting for pod "kube-proxy-7z7d2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.433403   55908 request.go:683] "Waited before sending request" delay="135.18186ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z7d2"
	I1109 14:05:00.633480   55908 request.go:683] "Waited before sending request" delay="187.320382ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:05:00.659363   55908 pod_ready.go:94] pod "kube-proxy-7z7d2" is "Ready"
	I1109 14:05:00.659405   55908 pod_ready.go:86] duration metric: took 361.264172ms for pod "kube-proxy-7z7d2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.659421   55908 pod_ready.go:83] waiting for pod "kube-proxy-9kff9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.832720   55908 request.go:683] "Waited before sending request" delay="173.209595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kff9"
	I1109 14:05:01.032589   55908 request.go:683] "Waited before sending request" delay="193.218072ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m04"
	I1109 14:05:01.233422   55908 request.go:683] "Waited before sending request" delay="73.212921ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kff9"
	I1109 14:05:01.433041   55908 request.go:683] "Waited before sending request" delay="190.18265ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m04"
	I1109 14:05:01.437082   55908 pod_ready.go:94] pod "kube-proxy-9kff9" is "Ready"
	I1109 14:05:01.437110   55908 pod_ready.go:86] duration metric: took 777.680802ms for pod "kube-proxy-9kff9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.437119   55908 pod_ready.go:83] waiting for pod "kube-proxy-f4hgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.632461   55908 request.go:683] "Waited before sending request" delay="195.271922ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4hgn"
	I1109 14:05:01.832811   55908 request.go:683] "Waited before sending request" delay="187.236042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:05:01.836535   55908 pod_ready.go:94] pod "kube-proxy-f4hgn" is "Ready"
	I1109 14:05:01.836565   55908 pod_ready.go:86] duration metric: took 399.438784ms for pod "kube-proxy-f4hgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.836576   55908 pod_ready.go:83] waiting for pod "kube-proxy-jcgxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:02.032823   55908 request.go:683] "Waited before sending request" delay="196.168826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jcgxk"
	I1109 14:05:02.232950   55908 request.go:683] "Waited before sending request" delay="192.345884ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:02.432483   55908 request.go:683] "Waited before sending request" delay="95.122005ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jcgxk"
	I1109 14:05:02.632558   55908 request.go:683] "Waited before sending request" delay="196.186501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:03.032762   55908 request.go:683] "Waited before sending request" delay="191.358141ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:03.433075   55908 request.go:683] "Waited before sending request" delay="91.200576ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	W1109 14:05:03.843130   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:05.843241   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:07.843386   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:10.345843   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:12.347116   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	I1109 14:05:12.843484   55908 pod_ready.go:94] pod "kube-proxy-jcgxk" is "Ready"
	I1109 14:05:12.843511   55908 pod_ready.go:86] duration metric: took 11.006928371s for pod "kube-proxy-jcgxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.847315   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.853111   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884" is "Ready"
	I1109 14:05:12.853137   55908 pod_ready.go:86] duration metric: took 5.793657ms for pod "kube-scheduler-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.853146   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.859861   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884-m02" is "Ready"
	I1109 14:05:12.859981   55908 pod_ready.go:86] duration metric: took 6.827161ms for pod "kube-scheduler-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.860005   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.867050   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884-m03" is "Ready"
	I1109 14:05:12.867075   55908 pod_ready.go:86] duration metric: took 7.050311ms for pod "kube-scheduler-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.867087   55908 pod_ready.go:40] duration metric: took 24.355592064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:05:12.924097   55908 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:05:12.927451   55908 out.go:179] * Done! kubectl is now configured to use "ha-423884" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:04:15 ha-423884 crio[619]: time="2025-11-09T14:04:15.560693803Z" level=info msg="Started container" PID=1120 containerID=b63a9a2c4e5fbd3fad199cd6e213c4eaeb9cf307dbae0131d130c7d22384f79e description=default/busybox-7b57f96db7-bprtw/busybox id=6e691df6-c3f8-4e79-938c-13c481c463f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87
	Nov 09 14:04:45 ha-423884 conmon[1119]: conmon 5bed382b465f29e125aa <ninfo>: container 1132 exited with status 1
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.632047702Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58fafaad-5a62-4ed2-a48c-ac5cfcffacd0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.633906069Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=36005cb0-6a41-40e9-950b-0b9545dd375d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.64579785Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=95caab63-861a-49ee-8b75-b5d15cfb1b60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.645906225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.658781722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662347217Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/184c9fdfb9f2c0bab041655609ae7f88de235f6f6f171cc5cec8c531dddf11f3/merged/etc/passwd: no such file or directory"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662465462Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/184c9fdfb9f2c0bab041655609ae7f88de235f6f6f171cc5cec8c531dddf11f3/merged/etc/group: no such file or directory"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662915043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.702334944Z" level=info msg="Created container b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c: kube-system/storage-provisioner/storage-provisioner" id=95caab63-861a-49ee-8b75-b5d15cfb1b60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.714514458Z" level=info msg="Starting container: b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c" id=63571f8b-fba8-4137-bf17-f12c81bfa57d name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.721604636Z" level=info msg="Started container" PID=1382 containerID=b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c description=kube-system/storage-provisioner/storage-provisioner id=63571f8b-fba8-4137-bf17-f12c81bfa57d name=/runtime.v1.RuntimeService/StartContainer sandboxID=624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.4215931Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.42716999Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.427323214Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.427398128Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.431810591Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.432264101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.43234498Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.436394288Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.436552493Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.43662753Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.440324498Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.440479609Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	b305e5d843218       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   29 seconds ago       Running             storage-provisioner       2                   624febe3bef0c       storage-provisioner                 kube-system
	4e1565497868e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Running             coredns                   1                   156c341c8adee       coredns-66bc5c9577-wl6rt            kube-system
	f0fd891d62df4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Running             coredns                   1                   0149d6cd55157       coredns-66bc5c9577-x2j4c            kube-system
	5bed382b465f2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       1                   624febe3bef0c       storage-provisioner                 kube-system
	b63a9a2c4e5fb       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   1                   49d4f70bf4320       busybox-7b57f96db7-bprtw            default
	6db8ccf0f7e5d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Running             kube-proxy                1                   7482e6b61af8f       kube-proxy-7z7d2                    kube-system
	2858b15648473       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Running             kindnet-cni               1                   ef99cabeed954       kindnet-4s4nj                       kube-system
	d4b5eae8c40aa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Running             kube-controller-manager   9                   8d358a601f8e9       kube-controller-manager-ha-423884   kube-system
	7a8b6eec5acc3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Running             kube-apiserver            8                   5dc1bc8f687be       kube-apiserver-ha-423884            kube-system
	78f5efcea671f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   8                   8d358a601f8e9       kube-controller-manager-ha-423884   kube-system
	947390d8997ff       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Running             etcd                      3                   0c595ba9083de       etcd-ha-423884                      kube-system
	c0ba74e816e13       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            7                   5dc1bc8f687be       kube-apiserver-ha-423884            kube-system
	374a5429d6a56       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Running             kube-scheduler            2                   3ee3bcbc0fa87       kube-scheduler-ha-423884            kube-system
	785a023345fda       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   About a minute ago   Running             kube-vip                  1                   90a0cbb7d6ed9       kube-vip-ha-423884                  kube-system
	
	
	==> coredns [4e1565497868eb720e6f89fa2f64f1892d9d7c7fb165c52c75c00a6e26644dcd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56290 - 23869 "HINFO IN 4295743501471833009.7362039906491692351. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027167594s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f0fd891d62df4ba35f7f2bb9f867a20bb1ee66fec8156164361837f74c33b151] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41286 - 39887 "HINFO IN 9165684468172783655.3008217872247164606. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020928117s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-423884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_50_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:50:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:05:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-423884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                657918f5-0b52-434a-8e2d-4cc93dc46e2f
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-bprtw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-wl6rt             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-x2j4c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-423884                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-4s4nj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-423884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-423884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-7z7d2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-423884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-423884                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 58s                  kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)    kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)    kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)    kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    14m                  kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m                  kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                  kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           14m                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   NodeReady                13m                  kubelet          Node ha-423884 status is now: NodeReady
	  Normal   RegisteredNode           13m                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   Starting                 101s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 101s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s (x8 over 101s)  kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           62s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           61s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           22s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	
	
	Name:               ha-423884-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_51_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:05:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-423884-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                36d1a056-7fa9-4feb-8fa0-03ee70e31c22
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c9qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-423884-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-ftnwt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-423884-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-423884-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-f4hgn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-423884-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-423884-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 48s                kube-proxy       
	  Normal   Starting                 13m                kube-proxy       
	  Normal   RegisteredNode           13m                node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-423884-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             10m                node-controller  Node ha-423884-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Warning  CgroupV1                 98s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 98s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  97s (x8 over 98s)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s (x8 over 98s)  kubelet          Node ha-423884-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s (x8 over 98s)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           62s                node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           61s                node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           22s                node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	
	
	Name:               ha-423884-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_52_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:52:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:05:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:05:11 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:05:11 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:05:11 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:05:11 +0000   Sun, 09 Nov 2025 13:52:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-423884-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d57bf8b4-5512-4316-94f7-79a9c657e155
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5bfxx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-423884-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-45jg2                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-423884-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-423884-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jcgxk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-423884-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-423884-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node ha-423884-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node ha-423884-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node ha-423884-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           62s                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           61s                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           22s                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	
	
	Name:               ha-423884-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_53_07_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:53:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:05:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-423884-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                750e1d79-71b2-4dc5-bf03-65a8c044964c
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2tcn6       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-proxy-9kff9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 14s                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet          Node ha-423884-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   CIDRAssignmentFailed     12m                cidrAllocator    Node ha-423884-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-423884-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           62s                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           61s                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s (x8 over 36s)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s (x8 over 36s)  kubelet          Node ha-423884-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s (x8 over 36s)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           22s                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 9 13:36] overlayfs: idmapped layers are currently not supported
	[ +50.497753] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:53] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:55] overlayfs: idmapped layers are currently not supported
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:03] overlayfs: idmapped layers are currently not supported
	[  +3.581786] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:05] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [947390d8997ffb89bea0e3c1e1bca5c1f8dd53d457d88db5aafd7664dbcb65b2] <==
	{"level":"warn","ts":"2025-11-09T14:04:20.723108Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:04:20.781289Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:04:21.076909Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:21.076972Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:25.078838Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:25.078893Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:29.080762Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:29.080821Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:33.081975Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:33.082125Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:37.083255Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:37.083315Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:41.084369Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:41.084422Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:45.085605Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:45.085763Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-09T14:04:45.944960Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b6e80321287bcc6a","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-09T14:04:45.945005Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:04:45.945017Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:04:46.018416Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b6e80321287bcc6a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-09T14:04:46.018472Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:04:46.161733Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:04:46.162210Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:05:16.107022Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.334087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:497 size:364476"}
	{"level":"info","ts":"2025-11-09T14:05:16.107100Z","caller":"traceutil/trace.go:172","msg":"trace[1651213599] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:497; response_revision:2344; }","duration":"105.42921ms","start":"2025-11-09T14:05:16.001658Z","end":"2025-11-09T14:05:16.107088Z","steps":["trace[1651213599] 'range keys from bolt db'  (duration: 104.510955ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:05:16 up 47 min,  0 user,  load average: 2.97, 1.71, 1.26
	Linux ha-423884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2858b156484730345bc39e8edca1ca8eabf5a6c2eb446824527423d351ec9fd3] <==
	I1109 14:04:55.424983       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:04:55.425011       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:04:55.425174       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1109 14:04:55.425305       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:04:55.425321       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:04:55.425492       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I1109 14:04:55.425604       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:04:55.425617       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:04:55.426323       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1109 14:05:05.419934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 14:05:05.419974       1 main.go:301] handling current node
	I1109 14:05:05.419990       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:05:05.419996       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:05:05.420187       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:05:05.420194       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:05:05.420281       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:05:05.420286       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:05:15.417972       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:05:15.418003       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:05:15.418241       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 14:05:15.418253       1 main.go:301] handling current node
	I1109 14:05:15.418304       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:05:15.418311       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:05:15.418415       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:05:15.418423       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [7a8b6eec5acc3d0e17aa26ea522ab1781b387d043859460f3c3aa2c80f07c6d7] <==
	I1109 14:04:10.251082       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:04:10.254066       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:04:10.254147       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:04:10.254176       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:04:10.254222       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:04:10.259503       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:04:10.259679       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:04:10.259777       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:04:10.265702       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:04:10.265731       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:04:10.268080       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:04:10.269054       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:04:10.282785       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:04:10.282828       1 policy_source.go:240] refreshing policies
	W1109 14:04:10.283375       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.4]
	I1109 14:04:10.285247       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:04:10.308873       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:04:10.309359       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1109 14:04:10.317898       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1109 14:04:10.610930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1109 14:04:12.050948       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1109 14:04:13.586194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:04:16.069224       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:04:16.362429       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:04:17.009317       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [c0ba74e816e1338d86f2f29c211b83c172784bbf106dba7bae518b2ee0201a4e] <==
	I1109 14:03:36.079801       1 server.go:150] Version: v1.34.1
	I1109 14:03:36.079970       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1109 14:03:37.231523       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:03:37.231632       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:03:37.231673       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:03:37.231710       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1109 14:03:37.231743       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:03:37.231775       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1109 14:03:37.233731       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:03:37.233812       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1109 14:03:37.233841       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:03:37.233872       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1109 14:03:37.233903       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:03:37.233935       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:03:37.264427       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:37.266135       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:03:37.266724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:03:37.284361       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:03:37.285347       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:03:37.285437       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:03:37.285697       1 instance.go:239] Using reconciler: lease
	W1109 14:03:37.287884       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:57.261619       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:57.262651       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1109 14:03:57.287379       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [78f5efcea671f680d59175d4a69693bbbeed9fa6a7cee912ee40e0f169e81738] <==
	I1109 14:03:38.933755       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:03:39.743954       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1109 14:03:39.744053       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:03:39.745947       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1109 14:03:39.746091       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:03:39.746103       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:03:39.746115       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:04:10.143520       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [d4b5eae8c40aaa51b1839a8972d830ffbb9a271e980e83d7f4e1e1a5a0e7c344] <==
	I1109 14:04:15.598430       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:04:15.608339       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:04:15.615143       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:04:15.620158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:04:15.626597       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:04:15.635956       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:04:15.646645       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:04:15.647826       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:04:15.648760       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:04:15.648829       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:04:15.650811       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:04:15.679894       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:04:15.695896       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:04:15.916336       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:15.916728       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	E1109 14:04:16.184059       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1109 14:04:16.664643       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:16.665695       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	I1109 14:04:56.714750       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:56.714878       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	I1109 14:04:56.849774       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:56.849836       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	E1109 14:04:56.882397       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 14:05:01.377737       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"a423ea2b-b11a-451e-9dc0-0b9bc17e2520\", ResourceVersion:\"2273\", Generation:1, CreationTimestamp:time.Date(2025, time.November, 9, 13, 50, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\
\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40017852e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:
\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea5d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolum
eClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea618), EmptyDir:(*v1.EmptyDirVolumeSource
)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portwor
xVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea678), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), A
zureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20250512-df8de77b\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x400208fe00)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVar
Source)(0x400208fe30)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.Volume
Mount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x40024818c0), Stdin:false, StdinOnce:false,
TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x4002225268), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400180ef30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(n
il), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400354e850)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40022252d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="Unhandle
dError"
	
	
	==> kube-proxy [6db8ccf0f7e5d6927f1f90014c3a7aaa5232618397851b52007fa71137db2843] <==
	I1109 14:04:16.669492       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:04:17.085521       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:04:17.200105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:04:17.200215       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 14:04:17.200363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:04:17.278348       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:04:17.278470       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:04:17.286098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:04:17.286454       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:04:17.286654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:04:17.290007       1 config.go:200] "Starting service config controller"
	I1109 14:04:17.290117       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:04:17.290166       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:04:17.290209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:04:17.290245       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:04:17.290290       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:04:17.297376       1 config.go:309] "Starting node config controller"
	I1109 14:04:17.297723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:04:17.297759       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:04:17.390352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:04:17.390429       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:04:17.390722       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [374a5429d6a564b1f172e68e0f603aefc3b04e7b183e31ef8b55c3ae430182ff] <==
	I1109 14:04:08.302323       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:04:08.304556       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:04:08.312882       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:04:08.316380       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:04:08.316458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:04:10.211376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:04:10.211546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:04:10.211639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:04:10.211730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:04:10.211824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:04:10.212031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:04:10.212181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:04:10.212276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:04:10.212389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:04:10.212522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:04:10.212737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:04:10.212857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:04:10.213039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:04:10.213127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:04:10.213178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:04:10.213230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:04:10.213342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:04:10.213396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:04:10.213833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1109 14:04:11.613639       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.263506     749 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-423884" podUID="8470dcc0-6c4f-4241-ad4e-8b896f6712b0"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.282901     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-423884\" already exists" pod="kube-system/etcd-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.282937     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.324502     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-423884\" already exists" pod="kube-system/kube-apiserver-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.324540     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.353962     749 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.370339     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-423884\" already exists" pod="kube-system/kube-controller-manager-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.385896     749 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.385930     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403495     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c249a88-1e05-40e0-b9d2-60a993f8c146-tmp\") pod \"storage-provisioner\" (UID: \"5c249a88-1e05-40e0-b9d2-60a993f8c146\") " pod="kube-system/storage-provisioner"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403551     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3de4d87-91fe-4303-a8db-50a70cbce4d7-lib-modules\") pod \"kube-proxy-7z7d2\" (UID: \"f3de4d87-91fe-4303-a8db-50a70cbce4d7\") " pod="kube-system/kube-proxy-7z7d2"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403593     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-lib-modules\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403613     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-xtables-lock\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403647     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-cni-cfg\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403685     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3de4d87-91fe-4303-a8db-50a70cbce4d7-xtables-lock\") pod \"kube-proxy-7z7d2\" (UID: \"f3de4d87-91fe-4303-a8db-50a70cbce4d7\") " pod="kube-system/kube-proxy-7z7d2"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.469444     749 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.588284     749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-423884" podStartSLOduration=0.588263843 podStartE2EDuration="588.263843ms" podCreationTimestamp="2025-11-09 14:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:04:14.53432425 +0000 UTC m=+39.410575888" watchObservedRunningTime="2025-11-09 14:04:14.588263843 +0000 UTC m=+39.464515481"
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.716436     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a WatchSource:0}: Error finding container ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a: Status 404 returned error can't find the container with id ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.783698     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb WatchSource:0}: Error finding container 624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb: Status 404 returned error can't find the container with id 624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.798946     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87 WatchSource:0}: Error finding container 49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87: Status 404 returned error can't find the container with id 49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.971628     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13 WatchSource:0}: Error finding container 156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13: Status 404 returned error can't find the container with id 156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13
	Nov 09 14:04:15 ha-423884 kubelet[749]: I1109 14:04:15.348436     749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb3ff8bceed3e182ae34f06d816435e" path="/var/lib/kubelet/pods/fbb3ff8bceed3e182ae34f06d816435e/volumes"
	Nov 09 14:04:35 ha-423884 kubelet[749]: E1109 14:04:35.276791     749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd\": container with ID starting with 12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd not found: ID does not exist" containerID="12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd"
	Nov 09 14:04:35 ha-423884 kubelet[749]: I1109 14:04:35.276883     749 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd" err="rpc error: code = NotFound desc = could not find container \"12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd\": container with ID starting with 12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd not found: ID does not exist"
	Nov 09 14:04:46 ha-423884 kubelet[749]: I1109 14:04:46.630690     749 scope.go:117] "RemoveContainer" containerID="5bed382b465f29e125aa4acb35f3e43d30cb2fa5b8aadd1ad04f56abc10722a7"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884
helpers_test.go:269: (dbg) Run:  kubectl --context ha-423884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (109.43s)
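
For context on the controller-manager error captured in the logs above ("Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again"): this is Kubernetes' optimistic-concurrency conflict on a stale resourceVersion. A minimal client-go sketch of the usual resolution, re-reading the object and retrying the update, is shown below; the helper name and annotation key are illustrative assumptions and are not part of minikube or of this test run.

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// touchKindnetDaemonSet is a hypothetical helper: it re-reads the kindnet
// DaemonSet on every attempt so the update is applied against the latest
// resourceVersion, which is the standard way to clear the "object has been
// modified" conflict logged by kube-controller-manager above.
func touchKindnetDaemonSet(ctx context.Context, cs kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		// Hypothetical annotation, for illustration only.
		ds.Annotations["example.io/touched"] = "true"
		_, err = cs.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
		return err
	})
}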

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.059162152s)
ha_test.go:415: expected profile "ha-423884" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-423884\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-423884\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesR
oot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-423884\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name
\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-dev
ice-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\"
:false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-423884
helpers_test.go:243: (dbg) docker inspect ha-423884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	        "Created": "2025-11-09T13:50:17.166169915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56035,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:03:28.454326897Z",
	            "FinishedAt": "2025-11-09T14:03:27.198748336Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hosts",
	        "LogPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8-json.log",
	        "Name": "/ha-423884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-423884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-423884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	                "LowerDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-423884",
	                "Source": "/var/lib/docker/volumes/ha-423884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-423884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-423884",
	                "name.minikube.sigs.k8s.io": "ha-423884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a517d91b9dd2fa9b7c1a86f3c7ce600153c1394576da0eb7ce565af8604f53c",
	            "SandboxKey": "/var/run/docker/netns/1a517d91b9dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-423884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:a0:79:53:a9:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b901b8dcb82129bdc4c62d2bf9cac8a365e41b87cf75b0978b149071ce152f44",
	                    "EndpointID": "863a231ee9ea532fe20e7b03570549e0d16ef617b4f2a4ad156998677dd29113",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-423884",
	                        "8c902201acb6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 logs -n 25: (1.659296071s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884-m04:/home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp testdata/cp-test.txt ha-423884-m04:/home/docker/cp-test.txt                                                            │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m04.txt │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m04_ha-423884.txt                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884.txt                                                │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node start m02 --alsologtostderr -v 5                                                                                     │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:54 UTC │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │ 09 Nov 25 13:54 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5                                                                                  │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:02 UTC │                     │
	│ node    │ ha-423884 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │ 09 Nov 25 14:03 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │ 09 Nov 25 14:05 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:03:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:03:28.177539   55908 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:03:28.177725   55908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:28.177737   55908 out.go:374] Setting ErrFile to fd 2...
	I1109 14:03:28.177743   55908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:28.178015   55908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:03:28.178387   55908 out.go:368] Setting JSON to false
	I1109 14:03:28.179233   55908 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2759,"bootTime":1762694250,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:03:28.179304   55908 start.go:143] virtualization:  
	I1109 14:03:28.182654   55908 out.go:179] * [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:03:28.186399   55908 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:03:28.186530   55908 notify.go:221] Checking for updates...
	I1109 14:03:28.192400   55908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:03:28.195380   55908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:28.198311   55908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:03:28.201212   55908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:03:28.204122   55908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:03:28.207578   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:28.208223   55908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:03:28.238570   55908 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:03:28.238679   55908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:28.302173   55908 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 14:03:28.29285158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:28.302284   55908 docker.go:319] overlay module found
	I1109 14:03:28.305382   55908 out.go:179] * Using the docker driver based on existing profile
	I1109 14:03:28.308271   55908 start.go:309] selected driver: docker
	I1109 14:03:28.308292   55908 start.go:930] validating driver "docker" against &{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:28.308437   55908 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:03:28.308547   55908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:28.367315   55908 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 14:03:28.35650136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:28.367739   55908 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:03:28.367770   55908 cni.go:84] Creating CNI manager for ""
	I1109 14:03:28.367814   55908 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 14:03:28.367923   55908 start.go:353] cluster config:
	{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:28.372921   55908 out.go:179] * Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	I1109 14:03:28.375587   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:03:28.378486   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:03:28.381428   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:28.381482   55908 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:03:28.381492   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:03:28.381532   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:03:28.381584   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:03:28.381603   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:03:28.381760   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:28.401896   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:03:28.401919   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:03:28.401946   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:03:28.401968   55908 start.go:360] acquireMachinesLock for ha-423884: {Name:mkda5c7a1ce8a51da0d8a40a6bd47565509d6909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:03:28.402035   55908 start.go:364] duration metric: took 47.073µs to acquireMachinesLock for "ha-423884"
	I1109 14:03:28.402054   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:03:28.402059   55908 fix.go:54] fixHost starting: 
	I1109 14:03:28.402320   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:28.419704   55908 fix.go:112] recreateIfNeeded on ha-423884: state=Stopped err=<nil>
	W1109 14:03:28.419733   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:03:28.423107   55908 out.go:252] * Restarting existing docker container for "ha-423884" ...
	I1109 14:03:28.423213   55908 cli_runner.go:164] Run: docker start ha-423884
	I1109 14:03:28.683970   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:28.706610   55908 kic.go:430] container "ha-423884" state is running.
	I1109 14:03:28.707012   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:28.730099   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:28.730346   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:03:28.730410   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:28.752410   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:28.752757   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:28.752774   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:03:28.753518   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:03:31.903504   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 14:03:31.903534   55908 ubuntu.go:182] provisioning hostname "ha-423884"
	I1109 14:03:31.903601   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:31.923571   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:31.923916   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:31.923929   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884 && echo "ha-423884" | sudo tee /etc/hostname
	I1109 14:03:32.084992   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 14:03:32.085077   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.103777   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:32.104122   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:32.104149   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:03:32.256008   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:03:32.256036   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:03:32.256065   55908 ubuntu.go:190] setting up certificates
	I1109 14:03:32.256074   55908 provision.go:84] configureAuth start
	I1109 14:03:32.256143   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:32.275304   55908 provision.go:143] copyHostCerts
	I1109 14:03:32.275347   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:32.275379   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:03:32.275389   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:32.275467   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:03:32.275563   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:32.275585   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:03:32.275593   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:32.275622   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:03:32.275677   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:32.275699   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:03:32.275704   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:32.275734   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:03:32.275800   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884 san=[127.0.0.1 192.168.49.2 ha-423884 localhost minikube]
	I1109 14:03:32.661025   55908 provision.go:177] copyRemoteCerts
	I1109 14:03:32.661095   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:03:32.661138   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.678774   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:32.784475   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:03:32.784549   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:03:32.802319   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:03:32.802376   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:03:32.819169   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:03:32.819280   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1109 14:03:32.836450   55908 provision.go:87] duration metric: took 580.362722ms to configureAuth
	I1109 14:03:32.836513   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:03:32.836762   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:32.836868   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.853354   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:32.853661   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:32.853680   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:03:33.144760   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:03:33.144782   55908 machine.go:97] duration metric: took 4.41442095s to provisionDockerMachine
	I1109 14:03:33.144794   55908 start.go:293] postStartSetup for "ha-423884" (driver="docker")
	I1109 14:03:33.144804   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:03:33.144881   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:03:33.144923   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.163262   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.271726   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:03:33.275165   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:03:33.275193   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:03:33.275203   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:03:33.275256   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:03:33.275333   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:03:33.275341   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:03:33.275445   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:03:33.282869   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:33.300086   55908 start.go:296] duration metric: took 155.276378ms for postStartSetup
	I1109 14:03:33.300181   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:33.300227   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.318900   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.421156   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:03:33.426364   55908 fix.go:56] duration metric: took 5.024296824s for fixHost
	I1109 14:03:33.426438   55908 start.go:83] releasing machines lock for "ha-423884", held for 5.024394146s
	I1109 14:03:33.426527   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:33.444332   55908 ssh_runner.go:195] Run: cat /version.json
	I1109 14:03:33.444382   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.444389   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:03:33.444465   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.466109   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.468674   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.567827   55908 ssh_runner.go:195] Run: systemctl --version
	I1109 14:03:33.665464   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:03:33.703682   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:03:33.708050   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:03:33.708118   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:03:33.716273   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:03:33.716295   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:03:33.716329   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:03:33.716378   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:03:33.732433   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:03:33.746199   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:03:33.746294   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:03:33.762279   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:03:33.775981   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:03:33.917723   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:03:34.035293   55908 docker.go:234] disabling docker service ...
	I1109 14:03:34.035371   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:03:34.050665   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:03:34.063795   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:03:34.194207   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:03:34.316201   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:03:34.328760   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:03:34.342596   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:03:34.342661   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.351380   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:03:34.351501   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.360283   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.369198   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.378151   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:03:34.386268   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.394888   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.403377   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.412509   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:03:34.419807   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:03:34.427015   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:34.533676   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:03:34.661746   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:03:34.661816   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:03:34.665477   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:03:34.665590   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:03:34.668882   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:03:34.697803   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:03:34.697964   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:34.726272   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:34.758410   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:03:34.761247   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:03:34.776734   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:03:34.780588   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:34.790316   55908 kubeadm.go:884] updating cluster {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:03:34.790470   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:34.790530   55908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:03:34.825584   55908 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:03:34.825621   55908 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:03:34.825685   55908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:03:34.851854   55908 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:03:34.851980   55908 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:03:34.851997   55908 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 14:03:34.852146   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:03:34.852273   55908 ssh_runner.go:195] Run: crio config
	I1109 14:03:34.903939   55908 cni.go:84] Creating CNI manager for ""
	I1109 14:03:34.903963   55908 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 14:03:34.903981   55908 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:03:34.904009   55908 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423884 NodeName:ha-423884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:03:34.904140   55908 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:03:34.904162   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:03:34.904219   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:03:34.915786   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:34.915909   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 14:03:34.915977   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:03:34.923406   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:03:34.923480   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1109 14:03:34.931134   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1109 14:03:34.943678   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:03:34.956560   55908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1109 14:03:34.969028   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:03:34.981532   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:03:34.985043   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:34.994528   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:35.107177   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:35.123121   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.2
	I1109 14:03:35.123194   55908 certs.go:195] generating shared ca certs ...
	I1109 14:03:35.123226   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:35.123409   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:03:35.123481   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:03:35.123518   55908 certs.go:257] generating profile certs ...
	I1109 14:03:35.123657   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:03:35.123781   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612
	I1109 14:03:35.123858   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:03:35.123923   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:03:35.123960   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:03:35.124009   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:03:35.124043   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:03:35.124090   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:03:35.124123   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:03:35.124169   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:03:35.124203   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:03:35.124294   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:03:35.124369   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:03:35.124408   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:03:35.124455   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:03:35.124508   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:03:35.124566   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:03:35.124648   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:35.124724   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.124808   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.124844   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.125710   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:03:35.143578   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:03:35.160309   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:03:35.180028   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:03:35.198803   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:03:35.222988   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:03:35.246464   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:03:35.273513   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:03:35.298574   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:03:35.323310   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:03:35.344665   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:03:35.365172   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:03:35.378569   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:03:35.385015   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:03:35.394601   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.398299   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.398412   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.453607   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:03:35.463012   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:03:35.471886   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.475852   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.475960   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.519535   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:03:35.532870   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:03:35.541526   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.545559   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.545647   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.587429   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:03:35.595355   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:03:35.598863   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:03:35.639394   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:03:35.682546   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:03:35.723686   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:03:35.769486   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:03:35.818163   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:03:35.873301   55908 kubeadm.go:401] StartCluster: {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:35.873423   55908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:03:35.873481   55908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:03:35.949725   55908 cri.go:89] found id: "947390d8997ffb89bea0e3c1e1bca5c1f8dd53d457d88db5aafd7664dbcb65b2"
	I1109 14:03:35.949794   55908 cri.go:89] found id: "c0ba74e816e1338d86f2f29c211b83c172784bbf106dba7bae518b2ee0201a4e"
	I1109 14:03:35.949821   55908 cri.go:89] found id: "785a023345fda66c98e73a27cd2aa79f3beb28f1d9847ff2264dd21ee91db42a"
	I1109 14:03:35.949838   55908 cri.go:89] found id: ""
	I1109 14:03:35.949915   55908 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:03:35.976461   55908 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:03:35Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:03:35.976622   55908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:03:35.995533   55908 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:03:35.995601   55908 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:03:35.995698   55908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:03:36.007080   55908 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:36.007609   55908 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-423884" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:36.007785   55908 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "ha-423884" cluster setting kubeconfig missing "ha-423884" context setting]
	I1109 14:03:36.008206   55908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.008996   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:03:36.009887   55908 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 14:03:36.009995   55908 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 14:03:36.010046   55908 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 14:03:36.010070   55908 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 14:03:36.009972   55908 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1109 14:03:36.010189   55908 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 14:03:36.010607   55908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:03:36.028288   55908 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1109 14:03:36.028364   55908 kubeadm.go:602] duration metric: took 32.744336ms to restartPrimaryControlPlane
	I1109 14:03:36.028386   55908 kubeadm.go:403] duration metric: took 155.094636ms to StartCluster
	I1109 14:03:36.028414   55908 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.028527   55908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:36.029250   55908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.029535   55908 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:03:36.029589   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:03:36.029633   55908 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:03:36.030494   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:36.035208   55908 out.go:179] * Enabled addons: 
	I1109 14:03:36.040262   55908 addons.go:515] duration metric: took 10.631239ms for enable addons: enabled=[]
	I1109 14:03:36.040364   55908 start.go:247] waiting for cluster config update ...
	I1109 14:03:36.040385   55908 start.go:256] writing updated cluster config ...
	I1109 14:03:36.043855   55908 out.go:203] 
	I1109 14:03:36.047167   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:36.047362   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.050885   55908 out.go:179] * Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	I1109 14:03:36.053842   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:03:36.056999   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:03:36.060038   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:03:36.060318   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:36.060344   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:03:36.060467   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:03:36.060496   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:03:36.060681   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.087960   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:03:36.087980   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:03:36.087991   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:03:36.088015   55908 start.go:360] acquireMachinesLock for ha-423884-m02: {Name:mkc465d60ac134a0502b48f535d5c2db44f7f07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:03:36.088071   55908 start.go:364] duration metric: took 40.263µs to acquireMachinesLock for "ha-423884-m02"
	I1109 14:03:36.088090   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:03:36.088095   55908 fix.go:54] fixHost starting: m02
	I1109 14:03:36.088348   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:36.119614   55908 fix.go:112] recreateIfNeeded on ha-423884-m02: state=Stopped err=<nil>
	W1109 14:03:36.119639   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:03:36.123884   55908 out.go:252] * Restarting existing docker container for "ha-423884-m02" ...
	I1109 14:03:36.123973   55908 cli_runner.go:164] Run: docker start ha-423884-m02
	I1109 14:03:36.530699   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:36.559612   55908 kic.go:430] container "ha-423884-m02" state is running.
	I1109 14:03:36.560004   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:36.586384   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.586624   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:03:36.586695   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:36.615730   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:36.616048   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:36.616058   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:03:36.616804   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49240->127.0.0.1:32823: read: connection reset by peer
	I1109 14:03:39.844217   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 14:03:39.844255   55908 ubuntu.go:182] provisioning hostname "ha-423884-m02"
	I1109 14:03:39.844325   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:39.868660   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:39.868984   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:39.869001   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m02 && echo "ha-423884-m02" | sudo tee /etc/hostname
	I1109 14:03:40.093355   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 14:03:40.093437   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.121586   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:40.121898   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:40.121920   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:03:40.328493   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:03:40.328522   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:03:40.328538   55908 ubuntu.go:190] setting up certificates
	I1109 14:03:40.328548   55908 provision.go:84] configureAuth start
	I1109 14:03:40.328618   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:40.372055   55908 provision.go:143] copyHostCerts
	I1109 14:03:40.372096   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:40.372169   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:03:40.372176   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:40.372257   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:03:40.372331   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:40.372347   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:03:40.372352   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:40.372377   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:03:40.372418   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:40.372433   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:03:40.372437   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:40.372461   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:03:40.372508   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m02 san=[127.0.0.1 192.168.49.3 ha-423884-m02 localhost minikube]
	I1109 14:03:40.460419   55908 provision.go:177] copyRemoteCerts
	I1109 14:03:40.460536   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:03:40.460611   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.505492   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:40.630054   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:03:40.630110   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:03:40.653044   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:03:40.653106   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:03:40.683285   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:03:40.683343   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:03:40.713212   55908 provision.go:87] duration metric: took 384.650953ms to configureAuth
	I1109 14:03:40.713278   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:03:40.713537   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:40.713674   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.745458   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:40.745765   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:40.745786   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:03:41.160286   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:03:41.160309   55908 machine.go:97] duration metric: took 4.573667407s to provisionDockerMachine
	I1109 14:03:41.160321   55908 start.go:293] postStartSetup for "ha-423884-m02" (driver="docker")
	I1109 14:03:41.160332   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:03:41.160396   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:03:41.160449   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.178991   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.284963   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:03:41.288725   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:03:41.288763   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:03:41.288776   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:03:41.288833   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:03:41.288922   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:03:41.288929   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:03:41.289033   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:03:41.297714   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:41.316091   55908 start.go:296] duration metric: took 155.749725ms for postStartSetup
	I1109 14:03:41.316183   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:41.316251   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.332754   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.441566   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:03:41.446853   55908 fix.go:56] duration metric: took 5.358725913s for fixHost
	I1109 14:03:41.446878   55908 start.go:83] releasing machines lock for "ha-423884-m02", held for 5.358799177s
	I1109 14:03:41.446969   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:41.471189   55908 out.go:179] * Found network options:
	I1109 14:03:41.474105   55908 out.go:179]   - NO_PROXY=192.168.49.2
	W1109 14:03:41.477016   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:03:41.477060   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:03:41.477139   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:03:41.477182   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.477214   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:03:41.477268   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.498901   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.500358   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.696694   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:03:41.701371   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:03:41.701516   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:03:41.709683   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:03:41.709721   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:03:41.709755   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:03:41.709825   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:03:41.725678   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:03:41.739787   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:03:41.739856   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:03:41.757143   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:03:41.771643   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:03:41.900022   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:03:42.105606   55908 docker.go:234] disabling docker service ...
	I1109 14:03:42.105681   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:03:42.144421   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:03:42.178839   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:03:42.468213   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:03:42.691726   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:03:42.709612   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:03:42.730882   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:03:42.730946   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.740089   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:03:42.740148   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.750087   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.759038   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.773257   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:03:42.782648   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.800890   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.812622   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.829326   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:03:42.846516   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:03:42.860429   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:43.078130   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:03:43.300172   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:03:43.300292   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:03:43.304336   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:03:43.304441   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:03:43.308290   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:03:43.334041   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:03:43.334158   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:43.366433   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:43.403997   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:03:43.406881   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:03:43.409947   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:03:43.426148   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:03:43.430019   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:43.439859   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:03:43.440179   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:43.440497   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:43.458429   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:03:43.458717   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.3
	I1109 14:03:43.458732   55908 certs.go:195] generating shared ca certs ...
	I1109 14:03:43.458747   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:43.458858   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:03:43.458906   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:03:43.458917   55908 certs.go:257] generating profile certs ...
	I1109 14:03:43.458991   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:03:43.459044   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.75d82079
	I1109 14:03:43.459087   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:03:43.459098   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:03:43.459110   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:03:43.459125   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:03:43.459143   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:03:43.459162   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:03:43.459178   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:03:43.459192   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:03:43.459209   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:03:43.459262   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:03:43.459293   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:03:43.459305   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:03:43.459331   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:03:43.459355   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:03:43.459385   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:03:43.459432   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:43.459462   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.459482   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:03:43.459498   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:03:43.459553   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:43.476791   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:43.576150   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 14:03:43.579947   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 14:03:43.588442   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 14:03:43.591845   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 14:03:43.600302   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 14:03:43.603828   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 14:03:43.612657   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 14:03:43.616127   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 14:03:43.624209   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 14:03:43.627692   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 14:03:43.635688   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 14:03:43.639181   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 14:03:43.647210   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:03:43.665935   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:03:43.683098   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:03:43.701792   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:03:43.720535   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:03:43.738207   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:03:43.756027   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:03:43.774278   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:03:43.792937   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:03:43.811113   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:03:43.829133   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:03:43.847536   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 14:03:43.860908   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 14:03:43.873289   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 14:03:43.886865   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 14:03:43.900616   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 14:03:43.913948   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 14:03:43.927015   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 14:03:43.939523   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:03:43.945583   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:03:43.954590   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.958760   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.958867   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.999953   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:03:44.007895   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:03:44.020206   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.024532   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.024619   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.068208   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:03:44.079840   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:03:44.089486   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.094109   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.094227   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.137949   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:03:44.146324   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:03:44.150369   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:03:44.191825   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:03:44.232925   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:03:44.273939   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:03:44.314652   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:03:44.356028   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:03:44.407731   55908 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1109 14:03:44.407917   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:03:44.407958   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:03:44.408031   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:03:44.419991   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:44.420052   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 14:03:44.420129   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:03:44.427945   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:03:44.428013   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 14:03:44.435476   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:03:44.448591   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:03:44.461928   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:03:44.475231   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:03:44.478933   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:44.488867   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:44.623612   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:44.638897   55908 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:03:44.639336   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:44.643324   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:03:44.646391   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:44.766731   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:44.781836   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:03:44.781971   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:03:44.782234   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m02" to be "Ready" ...
	W1109 14:03:54.783441   55908 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	I1109 14:03:58.293061   55908 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:04:08.294056   55908 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.49.1:36070->192.168.49.2:8443: read: connection reset by peer
	I1109 14:04:10.224067   55908 node_ready.go:49] node "ha-423884-m02" is "Ready"
	I1109 14:04:10.224094   55908 node_ready.go:38] duration metric: took 25.441822993s for node "ha-423884-m02" to be "Ready" ...
	I1109 14:04:10.224107   55908 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:04:10.224169   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:10.237071   55908 api_server.go:72] duration metric: took 25.598086143s to wait for apiserver process to appear ...
	I1109 14:04:10.237093   55908 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:04:10.237122   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:10.273674   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:10.273706   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:10.737933   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:10.747401   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:10.747476   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:11.238081   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:11.253573   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:11.253663   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:11.737248   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:11.745671   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:11.745753   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:12.237288   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:12.246058   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 14:04:12.247325   55908 api_server.go:141] control plane version: v1.34.1
	I1109 14:04:12.247378   55908 api_server.go:131] duration metric: took 2.0102771s to wait for apiserver health ...
	I1109 14:04:12.247399   55908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:04:12.255293   55908 system_pods.go:59] 26 kube-system pods found
	I1109 14:04:12.255379   55908 system_pods.go:61] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running
	I1109 14:04:12.255399   55908 system_pods.go:61] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running
	I1109 14:04:12.255418   55908 system_pods.go:61] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:12.255451   55908 system_pods.go:61] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:12.255475   55908 system_pods.go:61] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:12.255490   55908 system_pods.go:61] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:12.255507   55908 system_pods.go:61] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:12.255525   55908 system_pods.go:61] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:12.255556   55908 system_pods.go:61] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:12.255578   55908 system_pods.go:61] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:12.255596   55908 system_pods.go:61] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:12.255613   55908 system_pods.go:61] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:12.255631   55908 system_pods.go:61] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:12.255657   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:12.255679   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:12.255698   55908 system_pods.go:61] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:12.255716   55908 system_pods.go:61] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:12.255733   55908 system_pods.go:61] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:12.255760   55908 system_pods.go:61] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:12.255785   55908 system_pods.go:61] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:12.255802   55908 system_pods.go:61] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:12.255819   55908 system_pods.go:61] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:12.255834   55908 system_pods.go:61] "kube-vip-ha-423884" [8470dcc0-6c4f-4241-ad4e-8b896f6712b0] Running
	I1109 14:04:12.255904   55908 system_pods.go:61] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:12.255931   55908 system_pods.go:61] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:12.255949   55908 system_pods.go:61] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:12.255967   55908 system_pods.go:74] duration metric: took 8.549678ms to wait for pod list to return data ...
	I1109 14:04:12.255987   55908 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:04:12.259644   55908 default_sa.go:45] found service account: "default"
	I1109 14:04:12.259701   55908 default_sa.go:55] duration metric: took 3.685783ms for default service account to be created ...
	I1109 14:04:12.259723   55908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:04:12.265757   55908 system_pods.go:86] 26 kube-system pods found
	I1109 14:04:12.265830   55908 system_pods.go:89] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running
	I1109 14:04:12.265849   55908 system_pods.go:89] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running
	I1109 14:04:12.265871   55908 system_pods.go:89] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:12.265906   55908 system_pods.go:89] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:12.265928   55908 system_pods.go:89] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:12.265945   55908 system_pods.go:89] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:12.265961   55908 system_pods.go:89] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:12.265977   55908 system_pods.go:89] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:12.266004   55908 system_pods.go:89] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:12.266025   55908 system_pods.go:89] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:12.266042   55908 system_pods.go:89] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:12.266059   55908 system_pods.go:89] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:12.266077   55908 system_pods.go:89] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:12.266107   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:12.266238   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:12.266258   55908 system_pods.go:89] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:12.266274   55908 system_pods.go:89] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:12.266290   55908 system_pods.go:89] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:12.266322   55908 system_pods.go:89] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:12.266345   55908 system_pods.go:89] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:12.266364   55908 system_pods.go:89] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:12.266382   55908 system_pods.go:89] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:12.266400   55908 system_pods.go:89] "kube-vip-ha-423884" [8470dcc0-6c4f-4241-ad4e-8b896f6712b0] Running
	I1109 14:04:12.266427   55908 system_pods.go:89] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:12.266450   55908 system_pods.go:89] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:12.266468   55908 system_pods.go:89] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:12.266489   55908 system_pods.go:126] duration metric: took 6.747337ms to wait for k8s-apps to be running ...
	I1109 14:04:12.266510   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:12.266588   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:12.282135   55908 system_svc.go:56] duration metric: took 15.616371ms WaitForService to wait for kubelet
	I1109 14:04:12.282232   55908 kubeadm.go:587] duration metric: took 27.643251935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:12.282264   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:12.287797   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.287962   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.287995   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288016   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288036   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288054   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288080   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288104   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288124   55908 node_conditions.go:105] duration metric: took 5.843459ms to run NodePressure ...
	I1109 14:04:12.288147   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:12.288194   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:12.292016   55908 out.go:203] 
	I1109 14:04:12.295240   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:12.295416   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.298693   55908 out.go:179] * Starting "ha-423884-m03" control-plane node in "ha-423884" cluster
	I1109 14:04:12.302221   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:04:12.305225   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:04:12.307950   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:04:12.307975   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:04:12.308093   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:04:12.308103   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:04:12.308245   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.308454   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:04:12.335753   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:04:12.335772   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:04:12.335783   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:04:12.335806   55908 start.go:360] acquireMachinesLock for ha-423884-m03: {Name:mk2c1f49120f6acdbb0b7c106d84b578b982c1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:04:12.335852   55908 start.go:364] duration metric: took 32.608µs to acquireMachinesLock for "ha-423884-m03"
	I1109 14:04:12.335906   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:04:12.335913   55908 fix.go:54] fixHost starting: m03
	I1109 14:04:12.336176   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:04:12.360018   55908 fix.go:112] recreateIfNeeded on ha-423884-m03: state=Stopped err=<nil>
	W1109 14:04:12.360050   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:04:12.363431   55908 out.go:252] * Restarting existing docker container for "ha-423884-m03" ...
	I1109 14:04:12.363592   55908 cli_runner.go:164] Run: docker start ha-423884-m03
	I1109 14:04:12.653356   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:04:12.683958   55908 kic.go:430] container "ha-423884-m03" state is running.
	I1109 14:04:12.684306   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:12.727840   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.728107   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:04:12.728163   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:12.759896   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:12.760195   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:12.760204   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:04:12.761068   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:04:16.033281   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m03
	
	I1109 14:04:16.033354   55908 ubuntu.go:182] provisioning hostname "ha-423884-m03"
	I1109 14:04:16.033448   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:16.074078   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:16.074389   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:16.074407   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m03 && echo "ha-423884-m03" | sudo tee /etc/hostname
	I1109 14:04:16.423110   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m03
	
	I1109 14:04:16.423192   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:16.456144   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:16.456500   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:16.456523   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:04:16.751298   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:04:16.751374   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:04:16.751397   55908 ubuntu.go:190] setting up certificates
	I1109 14:04:16.751407   55908 provision.go:84] configureAuth start
	I1109 14:04:16.751471   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:16.793487   55908 provision.go:143] copyHostCerts
	I1109 14:04:16.793536   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:16.793570   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:04:16.793586   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:16.793664   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:04:16.793744   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:16.793767   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:04:16.793774   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:16.793803   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:04:16.793848   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:16.793870   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:04:16.793874   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:16.793899   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:04:16.793952   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m03 san=[127.0.0.1 192.168.49.4 ha-423884-m03 localhost minikube]
	I1109 14:04:17.244605   55908 provision.go:177] copyRemoteCerts
	I1109 14:04:17.244683   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:04:17.244730   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:17.267714   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:17.397341   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:04:17.397397   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:04:17.451209   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:04:17.451268   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:04:17.501897   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:04:17.501959   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:04:17.543399   55908 provision.go:87] duration metric: took 791.974444ms to configureAuth
	I1109 14:04:17.543429   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:04:17.543658   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:17.543760   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:17.578118   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:17.578425   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:17.578447   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:04:18.006743   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:04:18.006766   55908 machine.go:97] duration metric: took 5.278648591s to provisionDockerMachine
	I1109 14:04:18.006777   55908 start.go:293] postStartSetup for "ha-423884-m03" (driver="docker")
	I1109 14:04:18.006788   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:04:18.006849   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:04:18.006908   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.028378   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.136392   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:04:18.139676   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:04:18.139706   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:04:18.139718   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:04:18.139772   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:04:18.139877   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:04:18.139916   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:04:18.140203   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:04:18.151607   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:18.170641   55908 start.go:296] duration metric: took 163.846632ms for postStartSetup
	I1109 14:04:18.170734   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:04:18.170783   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.190645   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.303725   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:04:18.315157   55908 fix.go:56] duration metric: took 5.979236955s for fixHost
	I1109 14:04:18.315228   55908 start.go:83] releasing machines lock for "ha-423884-m03", held for 5.979367853s
	I1109 14:04:18.315337   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:18.346232   55908 out.go:179] * Found network options:
	I1109 14:04:18.349488   55908 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1109 14:04:18.352634   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352664   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352686   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352696   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:04:18.352763   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:04:18.352815   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.353042   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:04:18.353099   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.407037   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.416133   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.761655   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:04:18.827322   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:04:18.827443   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:04:18.846068   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:04:18.846140   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:04:18.846187   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:04:18.846266   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:04:18.869418   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:04:18.889860   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:04:18.889997   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:04:18.919381   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:04:18.942214   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:04:19.209339   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:04:19.469248   55908 docker.go:234] disabling docker service ...
	I1109 14:04:19.469315   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:04:19.487357   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:04:19.508816   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:04:19.750896   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:04:19.978351   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:04:20.002094   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:04:20.029962   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:04:20.030038   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.046014   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:04:20.046086   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.061773   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.083454   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.096347   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:04:20.114097   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.126722   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.143159   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.160109   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:04:20.177582   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:04:20.196091   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:20.468433   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:04:21.283004   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:04:21.283084   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:04:21.287304   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:04:21.287372   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:04:21.291538   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:04:21.328386   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:04:21.328481   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:21.361417   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:21.451954   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:04:21.455954   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:04:21.459224   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1109 14:04:21.462952   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:04:21.484807   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:04:21.489960   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:21.506775   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:04:21.507015   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:21.507301   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:04:21.526101   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:04:21.526377   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.4
	I1109 14:04:21.526391   55908 certs.go:195] generating shared ca certs ...
	I1109 14:04:21.526407   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:04:21.526515   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:04:21.526559   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:04:21.526572   55908 certs.go:257] generating profile certs ...
	I1109 14:04:21.526658   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:04:21.526726   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.7ffb4171
	I1109 14:04:21.526767   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:04:21.526781   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:04:21.526793   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:04:21.526808   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:04:21.526826   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:04:21.526836   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:04:21.526848   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:04:21.526910   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:04:21.526925   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:04:21.526982   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:04:21.527018   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:04:21.527028   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:04:21.527056   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:04:21.527080   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:04:21.527107   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:04:21.527154   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:21.527185   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:04:21.527200   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:21.527211   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:04:21.527271   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:04:21.551818   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:04:21.676202   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 14:04:21.680212   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 14:04:21.691215   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 14:04:21.701694   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 14:04:21.714762   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 14:04:21.719210   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 14:04:21.729229   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 14:04:21.733219   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 14:04:21.742594   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 14:04:21.746326   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 14:04:21.755768   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 14:04:21.759436   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 14:04:21.771660   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:04:21.795312   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:04:21.815560   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:04:21.833662   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:04:21.852805   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:04:21.870267   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:04:21.889041   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:04:21.907386   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:04:21.925376   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:04:21.943214   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:04:21.961586   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:04:21.979793   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 14:04:21.993395   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 14:04:22.006684   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 14:04:22.033388   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 14:04:22.052052   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 14:04:22.068060   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 14:04:22.086207   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 14:04:22.104940   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:04:22.112046   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:04:22.122102   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.125980   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.126092   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.167702   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:04:22.176107   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:04:22.184759   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.189529   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.189649   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.231896   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:04:22.240788   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:04:22.250648   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.254774   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.254890   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.295743   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:04:22.303694   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:04:22.308400   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:04:22.361240   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:04:22.402093   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:04:22.444367   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:04:22.486212   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:04:22.528227   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:04:22.571111   55908 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1109 14:04:22.571227   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:04:22.571257   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:04:22.571311   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:04:22.583651   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:04:22.583707   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 14:04:22.583783   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:04:22.592357   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:04:22.592434   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 14:04:22.602564   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:04:22.615684   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:04:22.634261   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:04:22.648965   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:04:22.652918   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:22.663308   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:22.796103   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:22.812101   55908 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:04:22.812586   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:22.817295   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:04:22.820274   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:22.956399   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:22.970086   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:04:22.970158   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:04:22.970389   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m03" to be "Ready" ...
	I1109 14:04:22.973665   55908 node_ready.go:49] node "ha-423884-m03" is "Ready"
	I1109 14:04:22.973696   55908 node_ready.go:38] duration metric: took 3.289742ms for node "ha-423884-m03" to be "Ready" ...
	I1109 14:04:22.973708   55908 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:04:22.973776   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:23.474233   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:23.974449   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:24.473927   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:24.973967   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:25.474635   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:25.973916   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:26.474480   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:26.974653   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:27.474731   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:27.974238   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:28.474498   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:28.973919   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:29.474517   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:29.974713   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:30.474585   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:30.974741   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:31.473916   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:31.974806   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:32.474537   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:32.973899   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:33.474884   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:33.974179   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:34.473908   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:34.973922   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:35.474186   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:35.974351   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:36.474756   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:36.973943   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:37.474873   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:37.974832   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:38.474095   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:38.486973   55908 api_server.go:72] duration metric: took 15.674824664s to wait for apiserver process to appear ...
	I1109 14:04:38.486994   55908 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:04:38.487013   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:38.496492   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 14:04:38.497757   55908 api_server.go:141] control plane version: v1.34.1
	I1109 14:04:38.497778   55908 api_server.go:131] duration metric: took 10.777406ms to wait for apiserver health ...
	I1109 14:04:38.497787   55908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:04:38.505258   55908 system_pods.go:59] 26 kube-system pods found
	I1109 14:04:38.505350   55908 system_pods.go:61] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.505374   55908 system_pods.go:61] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.505408   55908 system_pods.go:61] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:38.505432   55908 system_pods.go:61] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:38.505449   55908 system_pods.go:61] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:38.505466   55908 system_pods.go:61] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:38.505484   55908 system_pods.go:61] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:38.505510   55908 system_pods.go:61] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:38.505536   55908 system_pods.go:61] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:38.505555   55908 system_pods.go:61] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:38.505572   55908 system_pods.go:61] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:38.505590   55908 system_pods.go:61] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:38.505618   55908 system_pods.go:61] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:38.505641   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:38.505659   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:38.505675   55908 system_pods.go:61] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:38.505694   55908 system_pods.go:61] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:38.505721   55908 system_pods.go:61] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:38.505743   55908 system_pods.go:61] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:38.505761   55908 system_pods.go:61] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:38.505778   55908 system_pods.go:61] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:38.505796   55908 system_pods.go:61] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:38.505824   55908 system_pods.go:61] "kube-vip-ha-423884" [b043421c-6408-4df1-87d9-bc0d12fef736] Running
	I1109 14:04:38.505850   55908 system_pods.go:61] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:38.505867   55908 system_pods.go:61] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:38.505886   55908 system_pods.go:61] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:38.505905   55908 system_pods.go:74] duration metric: took 8.112367ms to wait for pod list to return data ...
	I1109 14:04:38.505935   55908 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:04:38.509739   55908 default_sa.go:45] found service account: "default"
	I1109 14:04:38.509805   55908 default_sa.go:55] duration metric: took 3.846441ms for default service account to be created ...
	I1109 14:04:38.509829   55908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:04:38.517291   55908 system_pods.go:86] 26 kube-system pods found
	I1109 14:04:38.517382   55908 system_pods.go:89] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.517407   55908 system_pods.go:89] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.517444   55908 system_pods.go:89] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:38.517467   55908 system_pods.go:89] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:38.517484   55908 system_pods.go:89] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:38.517500   55908 system_pods.go:89] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:38.517518   55908 system_pods.go:89] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:38.517545   55908 system_pods.go:89] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:38.517568   55908 system_pods.go:89] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:38.517586   55908 system_pods.go:89] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:38.517602   55908 system_pods.go:89] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:38.517620   55908 system_pods.go:89] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:38.517648   55908 system_pods.go:89] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:38.517670   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:38.517688   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:38.517705   55908 system_pods.go:89] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:38.517722   55908 system_pods.go:89] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:38.517750   55908 system_pods.go:89] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:38.517773   55908 system_pods.go:89] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:38.517794   55908 system_pods.go:89] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:38.517812   55908 system_pods.go:89] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:38.517830   55908 system_pods.go:89] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:38.517856   55908 system_pods.go:89] "kube-vip-ha-423884" [b043421c-6408-4df1-87d9-bc0d12fef736] Running
	I1109 14:04:38.517877   55908 system_pods.go:89] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:38.517894   55908 system_pods.go:89] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:38.517911   55908 system_pods.go:89] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:38.517933   55908 system_pods.go:126] duration metric: took 8.084994ms to wait for k8s-apps to be running ...
	I1109 14:04:38.517962   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:38.518068   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:38.532879   55908 system_svc.go:56] duration metric: took 14.908297ms WaitForService to wait for kubelet
	I1109 14:04:38.532917   55908 kubeadm.go:587] duration metric: took 15.720774062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:38.532935   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:38.536579   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536610   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536621   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536625   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536629   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536633   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536636   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536648   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536656   55908 node_conditions.go:105] duration metric: took 3.715265ms to run NodePressure ...
	I1109 14:04:38.536669   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:38.536695   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:38.540432   55908 out.go:203] 
	I1109 14:04:38.543707   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:38.543833   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:38.547314   55908 out.go:179] * Starting "ha-423884-m04" worker node in "ha-423884" cluster
	I1109 14:04:38.550154   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:04:38.553075   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:04:38.555918   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:04:38.555945   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:04:38.555984   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:04:38.556052   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:04:38.556067   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:04:38.556232   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:38.596080   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:04:38.596104   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:04:38.596117   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:04:38.596140   55908 start.go:360] acquireMachinesLock for ha-423884-m04: {Name:mk8ea327a8bd5498886fa5c18402495ffce70373 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:04:38.596197   55908 start.go:364] duration metric: took 36.833µs to acquireMachinesLock for "ha-423884-m04"
	I1109 14:04:38.596221   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:04:38.596226   55908 fix.go:54] fixHost starting: m04
	I1109 14:04:38.596505   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:04:38.628055   55908 fix.go:112] recreateIfNeeded on ha-423884-m04: state=Stopped err=<nil>
	W1109 14:04:38.628083   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:04:38.631296   55908 out.go:252] * Restarting existing docker container for "ha-423884-m04" ...
	I1109 14:04:38.631384   55908 cli_runner.go:164] Run: docker start ha-423884-m04
	I1109 14:04:38.994029   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:04:39.024143   55908 kic.go:430] container "ha-423884-m04" state is running.
	I1109 14:04:39.024645   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:39.049753   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:39.049997   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:04:39.050055   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:39.086245   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:39.086555   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:39.086564   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:04:39.087311   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54962->127.0.0.1:32833: read: connection reset by peer
	I1109 14:04:42.305377   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m04
	
	I1109 14:04:42.305403   55908 ubuntu.go:182] provisioning hostname "ha-423884-m04"
	I1109 14:04:42.305544   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:42.345625   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:42.345948   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:42.345975   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m04 && echo "ha-423884-m04" | sudo tee /etc/hostname
	I1109 14:04:42.540380   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m04
	
	I1109 14:04:42.540467   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:42.568082   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:42.568508   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:42.568528   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:04:42.740938   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:04:42.740964   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:04:42.740987   55908 ubuntu.go:190] setting up certificates
	I1109 14:04:42.740999   55908 provision.go:84] configureAuth start
	I1109 14:04:42.741056   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:42.758596   55908 provision.go:143] copyHostCerts
	I1109 14:04:42.758635   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:42.758666   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:04:42.758673   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:42.758748   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:04:42.758825   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:42.758841   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:04:42.758845   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:42.758872   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:04:42.758947   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:42.758966   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:04:42.758970   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:42.758992   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:04:42.759035   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m04 san=[127.0.0.1 192.168.49.5 ha-423884-m04 localhost minikube]
	I1109 14:04:43.620778   55908 provision.go:177] copyRemoteCerts
	I1109 14:04:43.620850   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:04:43.620891   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:43.638135   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:43.746715   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:04:43.746778   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:04:43.783559   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:04:43.783620   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:04:43.821821   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:04:43.821884   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:04:43.853243   55908 provision.go:87] duration metric: took 1.112229927s to configureAuth
	I1109 14:04:43.853316   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:04:43.853606   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:43.853756   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:43.895433   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:43.895732   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:43.895746   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:04:44.332263   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:04:44.332289   55908 machine.go:97] duration metric: took 5.282283014s to provisionDockerMachine
	I1109 14:04:44.332300   55908 start.go:293] postStartSetup for "ha-423884-m04" (driver="docker")
	I1109 14:04:44.332310   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:04:44.332371   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:04:44.332415   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.353937   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.464143   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:04:44.470188   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:04:44.470214   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:04:44.470225   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:04:44.470281   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:04:44.470354   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:04:44.470361   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:04:44.470470   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:04:44.479795   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:44.529226   55908 start.go:296] duration metric: took 196.901694ms for postStartSetup
	I1109 14:04:44.529386   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:04:44.529460   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.554649   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.673604   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:04:44.680762   55908 fix.go:56] duration metric: took 6.08452744s for fixHost
	I1109 14:04:44.680784   55908 start.go:83] releasing machines lock for "ha-423884-m04", held for 6.084574408s
	I1109 14:04:44.680867   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:44.721415   55908 out.go:179] * Found network options:
	I1109 14:04:44.724159   55908 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1109 14:04:44.726873   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726905   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726917   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726942   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726952   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726961   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:04:44.727033   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:04:44.727074   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:04:44.727134   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.727085   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.759201   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.763544   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:45.037350   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:04:45.135550   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:04:45.135658   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:04:45.148313   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:04:45.148341   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:04:45.148377   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:04:45.148433   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:04:45.185399   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:04:45.214772   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:04:45.214846   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:04:45.250953   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:04:45.287278   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:04:45.661062   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:04:45.935411   55908 docker.go:234] disabling docker service ...
	I1109 14:04:45.935486   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:04:45.952438   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:04:45.980819   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:04:46.226547   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:04:46.528888   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:04:46.569464   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:04:46.593467   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:04:46.593541   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.617190   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:04:46.617307   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.632140   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.655050   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.669679   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:04:46.703425   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.732454   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.748482   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.774220   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:04:46.794338   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:04:46.805580   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:47.010084   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:04:47.173577   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:04:47.173656   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:04:47.181540   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:04:47.181604   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:04:47.186006   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:04:47.222300   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:04:47.222379   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:47.253413   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:47.291652   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:04:47.294554   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:04:47.297616   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1109 14:04:47.301230   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1109 14:04:47.304267   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:04:47.343687   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:04:47.347710   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:47.360845   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:04:47.361083   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:47.361322   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:04:47.390238   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:04:47.390509   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.5
	I1109 14:04:47.390516   55908 certs.go:195] generating shared ca certs ...
	I1109 14:04:47.390534   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:04:47.390655   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:04:47.390695   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:04:47.390705   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:04:47.390717   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:04:47.390728   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:04:47.390739   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:04:47.390789   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:04:47.390815   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:04:47.390823   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:04:47.390848   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:04:47.390868   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:04:47.390889   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:04:47.390931   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:47.390957   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.390969   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.390980   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.390996   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:04:47.419171   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:04:47.458480   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:04:47.491840   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:04:47.515467   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:04:47.547694   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:04:47.571204   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:04:47.596967   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:04:47.604617   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:04:47.618704   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.623578   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.623648   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.684940   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:04:47.694950   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:04:47.704570   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.709468   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.709530   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.765768   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:04:47.777604   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:04:47.788177   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.793126   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.793191   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.845154   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:04:47.856386   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:04:47.861306   55908 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:04:47.861350   55908 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1109 14:04:47.861449   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:04:47.861522   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:04:47.870269   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:04:47.870337   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1109 14:04:47.880368   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:04:47.897846   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:04:47.917114   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:04:47.924685   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:47.936633   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:48.172177   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:48.203009   55908 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1109 14:04:48.203488   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:48.206078   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:04:48.209257   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:48.462006   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:48.478911   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:04:48.478989   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:04:48.479221   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m04" to be "Ready" ...
	I1109 14:04:48.482317   55908 node_ready.go:49] node "ha-423884-m04" is "Ready"
	I1109 14:04:48.482349   55908 node_ready.go:38] duration metric: took 3.109285ms for node "ha-423884-m04" to be "Ready" ...
	I1109 14:04:48.482363   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:48.482419   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:48.500348   55908 system_svc.go:56] duration metric: took 17.977329ms WaitForService to wait for kubelet
	I1109 14:04:48.500378   55908 kubeadm.go:587] duration metric: took 297.325981ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:48.500397   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:48.505686   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505725   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505737   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505742   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505745   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505750   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505754   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505758   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505763   55908 node_conditions.go:105] duration metric: took 5.360822ms to run NodePressure ...
	I1109 14:04:48.505778   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:48.505806   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:48.506138   55908 ssh_runner.go:195] Run: rm -f paused
	I1109 14:04:48.511449   55908 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:04:48.512086   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:04:48.531812   55908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wl6rt" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:04:50.538801   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	W1109 14:04:53.041776   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	W1109 14:04:55.540126   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	I1109 14:04:57.039850   55908 pod_ready.go:94] pod "coredns-66bc5c9577-wl6rt" is "Ready"
	I1109 14:04:57.039917   55908 pod_ready.go:86] duration metric: took 8.508070998s for pod "coredns-66bc5c9577-wl6rt" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.039928   55908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x2j4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.047591   55908 pod_ready.go:94] pod "coredns-66bc5c9577-x2j4c" is "Ready"
	I1109 14:04:57.047620   55908 pod_ready.go:86] duration metric: took 7.684548ms for pod "coredns-66bc5c9577-x2j4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.051339   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.057478   55908 pod_ready.go:94] pod "etcd-ha-423884" is "Ready"
	I1109 14:04:57.057507   55908 pod_ready.go:86] duration metric: took 6.138948ms for pod "etcd-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.057516   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.063675   55908 pod_ready.go:94] pod "etcd-ha-423884-m02" is "Ready"
	I1109 14:04:57.063703   55908 pod_ready.go:86] duration metric: took 6.180712ms for pod "etcd-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.063713   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.232913   55908 request.go:683] "Waited before sending request" delay="166.184726ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:04:57.235976   55908 pod_ready.go:94] pod "etcd-ha-423884-m03" is "Ready"
	I1109 14:04:57.236003   55908 pod_ready.go:86] duration metric: took 172.283157ms for pod "etcd-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.433310   55908 request.go:683] "Waited before sending request" delay="197.214303ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1109 14:04:57.437206   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.632527   55908 request.go:683] "Waited before sending request" delay="195.228871ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884"
	I1109 14:04:57.833084   55908 request.go:683] "Waited before sending request" delay="197.197966ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:04:57.836198   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884" is "Ready"
	I1109 14:04:57.836230   55908 pod_ready.go:86] duration metric: took 398.997813ms for pod "kube-apiserver-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.836239   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.032538   55908 request.go:683] "Waited before sending request" delay="196.215039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884-m02"
	I1109 14:04:58.232521   55908 request.go:683] "Waited before sending request" delay="195.230554ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:04:58.236341   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884-m02" is "Ready"
	I1109 14:04:58.236367   55908 pod_ready.go:86] duration metric: took 400.120914ms for pod "kube-apiserver-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.236376   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.433023   55908 request.go:683] "Waited before sending request" delay="196.538827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884-m03"
	I1109 14:04:58.632901   55908 request.go:683] "Waited before sending request" delay="196.260046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:04:58.636121   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884-m03" is "Ready"
	I1109 14:04:58.636150   55908 pod_ready.go:86] duration metric: took 399.76645ms for pod "kube-apiserver-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.832522   55908 request.go:683] "Waited before sending request" delay="196.25788ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1109 14:04:58.836640   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.033076   55908 request.go:683] "Waited before sending request" delay="196.288797ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884"
	I1109 14:04:59.233471   55908 request.go:683] "Waited before sending request" delay="197.170343ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:04:59.236562   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884" is "Ready"
	I1109 14:04:59.236586   55908 pod_ready.go:86] duration metric: took 399.915672ms for pod "kube-controller-manager-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.236595   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.432815   55908 request.go:683] "Waited before sending request" delay="196.151501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884-m02"
	I1109 14:04:59.633389   55908 request.go:683] "Waited before sending request" delay="197.339699ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:04:59.636611   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884-m02" is "Ready"
	I1109 14:04:59.636639   55908 pod_ready.go:86] duration metric: took 400.036716ms for pod "kube-controller-manager-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.636649   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.832944   55908 request.go:683] "Waited before sending request" delay="196.225586ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884-m03"
	I1109 14:05:00.032735   55908 request.go:683] "Waited before sending request" delay="196.153889ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:00.114688   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884-m03" is "Ready"
	I1109 14:05:00.114728   55908 pod_ready.go:86] duration metric: took 478.071803ms for pod "kube-controller-manager-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.242596   55908 request.go:683] "Waited before sending request" delay="127.725515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1109 14:05:00.298102   55908 pod_ready.go:83] waiting for pod "kube-proxy-7z7d2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.433403   55908 request.go:683] "Waited before sending request" delay="135.18186ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z7d2"
	I1109 14:05:00.633480   55908 request.go:683] "Waited before sending request" delay="187.320382ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:05:00.659363   55908 pod_ready.go:94] pod "kube-proxy-7z7d2" is "Ready"
	I1109 14:05:00.659405   55908 pod_ready.go:86] duration metric: took 361.264172ms for pod "kube-proxy-7z7d2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.659421   55908 pod_ready.go:83] waiting for pod "kube-proxy-9kff9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.832720   55908 request.go:683] "Waited before sending request" delay="173.209595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kff9"
	I1109 14:05:01.032589   55908 request.go:683] "Waited before sending request" delay="193.218072ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m04"
	I1109 14:05:01.233422   55908 request.go:683] "Waited before sending request" delay="73.212921ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kff9"
	I1109 14:05:01.433041   55908 request.go:683] "Waited before sending request" delay="190.18265ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m04"
	I1109 14:05:01.437082   55908 pod_ready.go:94] pod "kube-proxy-9kff9" is "Ready"
	I1109 14:05:01.437110   55908 pod_ready.go:86] duration metric: took 777.680802ms for pod "kube-proxy-9kff9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.437119   55908 pod_ready.go:83] waiting for pod "kube-proxy-f4hgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.632461   55908 request.go:683] "Waited before sending request" delay="195.271922ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4hgn"
	I1109 14:05:01.832811   55908 request.go:683] "Waited before sending request" delay="187.236042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:05:01.836535   55908 pod_ready.go:94] pod "kube-proxy-f4hgn" is "Ready"
	I1109 14:05:01.836565   55908 pod_ready.go:86] duration metric: took 399.438784ms for pod "kube-proxy-f4hgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.836576   55908 pod_ready.go:83] waiting for pod "kube-proxy-jcgxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:02.032823   55908 request.go:683] "Waited before sending request" delay="196.168826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jcgxk"
	I1109 14:05:02.232950   55908 request.go:683] "Waited before sending request" delay="192.345884ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:02.432483   55908 request.go:683] "Waited before sending request" delay="95.122005ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jcgxk"
	I1109 14:05:02.632558   55908 request.go:683] "Waited before sending request" delay="196.186501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:03.032762   55908 request.go:683] "Waited before sending request" delay="191.358141ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:03.433075   55908 request.go:683] "Waited before sending request" delay="91.200576ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	W1109 14:05:03.843130   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:05.843241   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:07.843386   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:10.345843   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:12.347116   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	I1109 14:05:12.843484   55908 pod_ready.go:94] pod "kube-proxy-jcgxk" is "Ready"
	I1109 14:05:12.843511   55908 pod_ready.go:86] duration metric: took 11.006928371s for pod "kube-proxy-jcgxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.847315   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.853111   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884" is "Ready"
	I1109 14:05:12.853137   55908 pod_ready.go:86] duration metric: took 5.793657ms for pod "kube-scheduler-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.853146   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.859861   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884-m02" is "Ready"
	I1109 14:05:12.859981   55908 pod_ready.go:86] duration metric: took 6.827161ms for pod "kube-scheduler-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.860005   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.867050   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884-m03" is "Ready"
	I1109 14:05:12.867075   55908 pod_ready.go:86] duration metric: took 7.050311ms for pod "kube-scheduler-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.867087   55908 pod_ready.go:40] duration metric: took 24.355592064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:05:12.924097   55908 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:05:12.927451   55908 out.go:179] * Done! kubectl is now configured to use "ha-423884" cluster and "default" namespace by default
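	The pod_ready lines above are minikube's wait loop: each kube-system pod is polled until it reports the Ready condition or disappears, and the "Waited before sending request" messages come from client-go's client-side rate limiter. Below is a minimal sketch of that polling pattern with client-go; it is illustrative only, not minikube's actual pod_ready code, and the kubeconfig path, pod name and timeout are assumptions.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReadyOrGone polls a pod until it reports the Ready condition,
	// is deleted ("or be gone" in the log above), or the timeout elapses.
	func waitPodReadyOrGone(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return nil // a deleted pod also ends the wait
			}
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // crude fixed interval for illustration
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Illustrative only: load the default kubeconfig and wait on one pod.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReadyOrGone(cs, "kube-system", "kube-proxy-jcgxk", 2*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready (or gone)")
	}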
	
	
	==> CRI-O <==
	Nov 09 14:04:15 ha-423884 crio[619]: time="2025-11-09T14:04:15.560693803Z" level=info msg="Started container" PID=1120 containerID=b63a9a2c4e5fbd3fad199cd6e213c4eaeb9cf307dbae0131d130c7d22384f79e description=default/busybox-7b57f96db7-bprtw/busybox id=6e691df6-c3f8-4e79-938c-13c481c463f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87
	Nov 09 14:04:45 ha-423884 conmon[1119]: conmon 5bed382b465f29e125aa <ninfo>: container 1132 exited with status 1
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.632047702Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58fafaad-5a62-4ed2-a48c-ac5cfcffacd0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.633906069Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=36005cb0-6a41-40e9-950b-0b9545dd375d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.64579785Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=95caab63-861a-49ee-8b75-b5d15cfb1b60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.645906225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.658781722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662347217Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/184c9fdfb9f2c0bab041655609ae7f88de235f6f6f171cc5cec8c531dddf11f3/merged/etc/passwd: no such file or directory"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662465462Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/184c9fdfb9f2c0bab041655609ae7f88de235f6f6f171cc5cec8c531dddf11f3/merged/etc/group: no such file or directory"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662915043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.702334944Z" level=info msg="Created container b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c: kube-system/storage-provisioner/storage-provisioner" id=95caab63-861a-49ee-8b75-b5d15cfb1b60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.714514458Z" level=info msg="Starting container: b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c" id=63571f8b-fba8-4137-bf17-f12c81bfa57d name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.721604636Z" level=info msg="Started container" PID=1382 containerID=b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c description=kube-system/storage-provisioner/storage-provisioner id=63571f8b-fba8-4137-bf17-f12c81bfa57d name=/runtime.v1.RuntimeService/StartContainer sandboxID=624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.4215931Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.42716999Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.427323214Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.427398128Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.431810591Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.432264101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.43234498Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.436394288Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.436552493Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.43662753Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.440324498Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.440479609Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	b305e5d843218       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   33 seconds ago       Running             storage-provisioner       2                   624febe3bef0c       storage-provisioner                 kube-system
	4e1565497868e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Running             coredns                   1                   156c341c8adee       coredns-66bc5c9577-wl6rt            kube-system
	f0fd891d62df4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   About a minute ago   Running             coredns                   1                   0149d6cd55157       coredns-66bc5c9577-x2j4c            kube-system
	5bed382b465f2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Exited              storage-provisioner       1                   624febe3bef0c       storage-provisioner                 kube-system
	b63a9a2c4e5fb       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   About a minute ago   Running             busybox                   1                   49d4f70bf4320       busybox-7b57f96db7-bprtw            default
	6db8ccf0f7e5d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Running             kube-proxy                1                   7482e6b61af8f       kube-proxy-7z7d2                    kube-system
	2858b15648473       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Running             kindnet-cni               1                   ef99cabeed954       kindnet-4s4nj                       kube-system
	d4b5eae8c40aa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Running             kube-controller-manager   9                   8d358a601f8e9       kube-controller-manager-ha-423884   kube-system
	7a8b6eec5acc3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Running             kube-apiserver            8                   5dc1bc8f687be       kube-apiserver-ha-423884            kube-system
	78f5efcea671f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   8                   8d358a601f8e9       kube-controller-manager-ha-423884   kube-system
	947390d8997ff       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Running             etcd                      3                   0c595ba9083de       etcd-ha-423884                      kube-system
	c0ba74e816e13       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            7                   5dc1bc8f687be       kube-apiserver-ha-423884            kube-system
	374a5429d6a56       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Running             kube-scheduler            2                   3ee3bcbc0fa87       kube-scheduler-ha-423884            kube-system
	785a023345fda       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   About a minute ago   Running             kube-vip                  1                   90a0cbb7d6ed9       kube-vip-ha-423884                  kube-system
	
	
	==> coredns [4e1565497868eb720e6f89fa2f64f1892d9d7c7fb165c52c75c00a6e26644dcd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56290 - 23869 "HINFO IN 4295743501471833009.7362039906491692351. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027167594s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f0fd891d62df4ba35f7f2bb9f867a20bb1ee66fec8156164361837f74c33b151] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41286 - 39887 "HINFO IN 9165684468172783655.3008217872247164606. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020928117s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-423884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_50_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:50:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:05:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-423884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                657918f5-0b52-434a-8e2d-4cc93dc46e2f
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-bprtw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-wl6rt             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-x2j4c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-423884                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-4s4nj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-423884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-423884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-7z7d2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-423884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-423884                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 62s                  kube-proxy       
	  Normal   Starting                 14m                  kube-proxy       
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)    kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)    kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)    kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    14m                  kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m                  kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m                  kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           14m                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   NodeReady                13m                  kubelet          Node ha-423884 status is now: NodeReady
	  Normal   RegisteredNode           13m                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   Starting                 105s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 105s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  105s (x8 over 105s)  kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x8 over 105s)  kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x8 over 105s)  kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           66s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           65s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           26s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	
	
	Name:               ha-423884-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_51_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:05:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:04:10 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-423884-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                36d1a056-7fa9-4feb-8fa0-03ee70e31c22
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c9qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-423884-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-ftnwt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-423884-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-423884-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-f4hgn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-423884-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-423884-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 52s                  kube-proxy       
	  Normal   Starting                 13m                  kube-proxy       
	  Normal   RegisteredNode           13m                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   Starting                 11m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-423884-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)    kubelet          Node ha-423884-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-423884-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeNotReady             10m                  node-controller  Node ha-423884-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           10m                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Warning  CgroupV1                 102s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 102s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  101s (x8 over 102s)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s (x8 over 102s)  kubelet          Node ha-423884-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s (x8 over 102s)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           66s                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           65s                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           26s                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	
	
	Name:               ha-423884-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_52_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:52:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:05:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:05:11 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:05:11 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:05:11 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:05:11 +0000   Sun, 09 Nov 2025 13:52:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-423884-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d57bf8b4-5512-4316-94f7-79a9c657e155
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5bfxx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-423884-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-45jg2                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-423884-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-423884-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-jcgxk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-423884-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-423884-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 7s                 kube-proxy       
	  Normal   RegisteredNode           13m                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Warning  CgroupV1                 66s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  66s (x8 over 66s)  kubelet          Node ha-423884-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s (x8 over 66s)  kubelet          Node ha-423884-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s (x8 over 66s)  kubelet          Node ha-423884-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           66s                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           65s                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           26s                node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	
	
	Name:               ha-423884-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_53_07_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:53:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:05:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-423884-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                750e1d79-71b2-4dc5-bf03-65a8c044964c
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2tcn6       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-proxy-9kff9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18s                kube-proxy       
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasNoDiskPressure    12m (x3 over 12m)  kubelet          Node ha-423884-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x3 over 12m)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x3 over 12m)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   CIDRAssignmentFailed     12m                cidrAllocator    Node ha-423884-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           12m                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-423884-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           66s                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           65s                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   Starting                 41s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 41s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  37s (x8 over 40s)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s (x8 over 40s)  kubelet          Node ha-423884-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s (x8 over 40s)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           26s                node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
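	The node blocks above are kubectl-describe output for each node. For reference, the Ready/MemoryPressure/DiskPressure rows are rendered from each node's status.conditions, which can also be read directly with client-go. A small illustrative sketch follows; it is not part of the test suite, and the kubeconfig path is an assumption.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		// Print each node's Ready condition, the same field shown in the
		// Conditions tables above.
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("%s\tReady=%s\t(%s)\n", n.Name, c.Status, c.Reason)
				}
			}
		}
	}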
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 9 13:36] overlayfs: idmapped layers are currently not supported
	[ +50.497753] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:53] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:55] overlayfs: idmapped layers are currently not supported
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:03] overlayfs: idmapped layers are currently not supported
	[  +3.581786] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:05] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [947390d8997ffb89bea0e3c1e1bca5c1f8dd53d457d88db5aafd7664dbcb65b2] <==
	{"level":"warn","ts":"2025-11-09T14:04:20.723108Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:04:20.781289Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:04:21.076909Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:21.076972Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:25.078838Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:25.078893Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:29.080762Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:29.080821Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:33.081975Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:33.082125Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:37.083255Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:37.083315Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:41.084369Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:41.084422Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:45.085605Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-11-09T14:04:45.085763Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"b6e80321287bcc6a","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-11-09T14:04:45.944960Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b6e80321287bcc6a","stream-type":"stream Message"}
	{"level":"info","ts":"2025-11-09T14:04:45.945005Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:04:45.945017Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:04:46.018416Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"b6e80321287bcc6a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-11-09T14:04:46.018472Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:04:46.161733Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"info","ts":"2025-11-09T14:04:46.162210Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"b6e80321287bcc6a"}
	{"level":"warn","ts":"2025-11-09T14:05:16.107022Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"105.334087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:497 size:364476"}
	{"level":"info","ts":"2025-11-09T14:05:16.107100Z","caller":"traceutil/trace.go:172","msg":"trace[1651213599] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:497; response_revision:2344; }","duration":"105.42921ms","start":"2025-11-09T14:05:16.001658Z","end":"2025-11-09T14:05:16.107088Z","steps":["trace[1651213599] 'range keys from bolt db'  (duration: 104.510955ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:05:20 up 47 min,  0 user,  load average: 2.98, 1.73, 1.27
	Linux ha-423884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2858b156484730345bc39e8edca1ca8eabf5a6c2eb446824527423d351ec9fd3] <==
	I1109 14:04:55.424983       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:04:55.425011       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:04:55.425174       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1109 14:04:55.425305       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:04:55.425321       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:04:55.425492       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I1109 14:04:55.425604       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:04:55.425617       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:04:55.426323       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1109 14:05:05.419934       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 14:05:05.419974       1 main.go:301] handling current node
	I1109 14:05:05.419990       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:05:05.419996       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:05:05.420187       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:05:05.420194       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:05:05.420281       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:05:05.420286       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:05:15.417972       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:05:15.418003       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:05:15.418241       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 14:05:15.418253       1 main.go:301] handling current node
	I1109 14:05:15.418304       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:05:15.418311       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:05:15.418415       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:05:15.418423       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [7a8b6eec5acc3d0e17aa26ea522ab1781b387d043859460f3c3aa2c80f07c6d7] <==
	I1109 14:04:10.251082       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:04:10.254066       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:04:10.254147       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:04:10.254176       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:04:10.254222       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:04:10.259503       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:04:10.259679       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:04:10.259777       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:04:10.265702       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:04:10.265731       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:04:10.268080       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:04:10.269054       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:04:10.282785       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:04:10.282828       1 policy_source.go:240] refreshing policies
	W1109 14:04:10.283375       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.4]
	I1109 14:04:10.285247       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:04:10.308873       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:04:10.309359       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1109 14:04:10.317898       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1109 14:04:10.610930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1109 14:04:12.050948       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1109 14:04:13.586194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:04:16.069224       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:04:16.362429       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:04:17.009317       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [c0ba74e816e1338d86f2f29c211b83c172784bbf106dba7bae518b2ee0201a4e] <==
	I1109 14:03:36.079801       1 server.go:150] Version: v1.34.1
	I1109 14:03:36.079970       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1109 14:03:37.231523       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:03:37.231632       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:03:37.231673       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:03:37.231710       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1109 14:03:37.231743       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:03:37.231775       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1109 14:03:37.233731       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:03:37.233812       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1109 14:03:37.233841       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:03:37.233872       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1109 14:03:37.233903       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:03:37.233935       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:03:37.264427       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:37.266135       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:03:37.266724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:03:37.284361       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:03:37.285347       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:03:37.285437       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:03:37.285697       1 instance.go:239] Using reconciler: lease
	W1109 14:03:37.287884       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:57.261619       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:57.262651       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1109 14:03:57.287379       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [78f5efcea671f680d59175d4a69693bbbeed9fa6a7cee912ee40e0f169e81738] <==
	I1109 14:03:38.933755       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:03:39.743954       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1109 14:03:39.744053       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:03:39.745947       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1109 14:03:39.746091       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:03:39.746103       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:03:39.746115       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:04:10.143520       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [d4b5eae8c40aaa51b1839a8972d830ffbb9a271e980e83d7f4e1e1a5a0e7c344] <==
	I1109 14:04:15.598430       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:04:15.608339       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:04:15.615143       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:04:15.620158       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:04:15.626597       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:04:15.635956       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:04:15.646645       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:04:15.647826       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:04:15.648760       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:04:15.648829       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:04:15.650811       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:04:15.679894       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:04:15.695896       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:04:15.916336       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:15.916728       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	E1109 14:04:16.184059       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1109 14:04:16.664643       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:16.665695       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	I1109 14:04:56.714750       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:56.714878       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	I1109 14:04:56.849774       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:56.849836       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	E1109 14:04:56.882397       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 14:05:01.377737       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"a423ea2b-b11a-451e-9dc0-0b9bc17e2520\", ResourceVersion:\"2273\", Generation:1, CreationTimestamp:time.Date(2025, time.November, 9, 13, 50, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\
\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40017852e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:
\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea5d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolum
eClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea618), EmptyDir:(*v1.EmptyDirVolumeSource
)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portwor
xVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea678), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), A
zureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20250512-df8de77b\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x400208fe00)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVar
Source)(0x400208fe30)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.Volume
Mount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x40024818c0), Stdin:false, StdinOnce:false,
TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x4002225268), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400180ef30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(n
il), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400354e850)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40022252d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="Unhandle
dError"
	
	
	==> kube-proxy [6db8ccf0f7e5d6927f1f90014c3a7aaa5232618397851b52007fa71137db2843] <==
	I1109 14:04:16.669492       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:04:17.085521       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:04:17.200105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:04:17.200215       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 14:04:17.200363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:04:17.278348       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:04:17.278470       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:04:17.286098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:04:17.286454       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:04:17.286654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:04:17.290007       1 config.go:200] "Starting service config controller"
	I1109 14:04:17.290117       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:04:17.290166       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:04:17.290209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:04:17.290245       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:04:17.290290       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:04:17.297376       1 config.go:309] "Starting node config controller"
	I1109 14:04:17.297723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:04:17.297759       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:04:17.390352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:04:17.390429       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:04:17.390722       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [374a5429d6a564b1f172e68e0f603aefc3b04e7b183e31ef8b55c3ae430182ff] <==
	I1109 14:04:08.302323       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:04:08.304556       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:04:08.312882       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:04:08.316380       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:04:08.316458       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:04:10.211376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:04:10.211546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:04:10.211639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:04:10.211730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:04:10.211824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:04:10.212031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:04:10.212181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:04:10.212276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:04:10.212389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:04:10.212522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:04:10.212737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:04:10.212857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:04:10.213039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:04:10.213127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:04:10.213178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:04:10.213230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:04:10.213342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:04:10.213396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:04:10.213833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1109 14:04:11.613639       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.263506     749 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-423884" podUID="8470dcc0-6c4f-4241-ad4e-8b896f6712b0"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.282901     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-423884\" already exists" pod="kube-system/etcd-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.282937     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.324502     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-423884\" already exists" pod="kube-system/kube-apiserver-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.324540     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.353962     749 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.370339     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-423884\" already exists" pod="kube-system/kube-controller-manager-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.385896     749 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.385930     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403495     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c249a88-1e05-40e0-b9d2-60a993f8c146-tmp\") pod \"storage-provisioner\" (UID: \"5c249a88-1e05-40e0-b9d2-60a993f8c146\") " pod="kube-system/storage-provisioner"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403551     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3de4d87-91fe-4303-a8db-50a70cbce4d7-lib-modules\") pod \"kube-proxy-7z7d2\" (UID: \"f3de4d87-91fe-4303-a8db-50a70cbce4d7\") " pod="kube-system/kube-proxy-7z7d2"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403593     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-lib-modules\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403613     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-xtables-lock\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403647     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-cni-cfg\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403685     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3de4d87-91fe-4303-a8db-50a70cbce4d7-xtables-lock\") pod \"kube-proxy-7z7d2\" (UID: \"f3de4d87-91fe-4303-a8db-50a70cbce4d7\") " pod="kube-system/kube-proxy-7z7d2"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.469444     749 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.588284     749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-423884" podStartSLOduration=0.588263843 podStartE2EDuration="588.263843ms" podCreationTimestamp="2025-11-09 14:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:04:14.53432425 +0000 UTC m=+39.410575888" watchObservedRunningTime="2025-11-09 14:04:14.588263843 +0000 UTC m=+39.464515481"
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.716436     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a WatchSource:0}: Error finding container ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a: Status 404 returned error can't find the container with id ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.783698     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb WatchSource:0}: Error finding container 624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb: Status 404 returned error can't find the container with id 624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.798946     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87 WatchSource:0}: Error finding container 49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87: Status 404 returned error can't find the container with id 49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.971628     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13 WatchSource:0}: Error finding container 156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13: Status 404 returned error can't find the container with id 156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13
	Nov 09 14:04:15 ha-423884 kubelet[749]: I1109 14:04:15.348436     749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb3ff8bceed3e182ae34f06d816435e" path="/var/lib/kubelet/pods/fbb3ff8bceed3e182ae34f06d816435e/volumes"
	Nov 09 14:04:35 ha-423884 kubelet[749]: E1109 14:04:35.276791     749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd\": container with ID starting with 12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd not found: ID does not exist" containerID="12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd"
	Nov 09 14:04:35 ha-423884 kubelet[749]: I1109 14:04:35.276883     749 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd" err="rpc error: code = NotFound desc = could not find container \"12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd\": container with ID starting with 12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd not found: ID does not exist"
	Nov 09 14:04:46 ha-423884 kubelet[749]: I1109 14:04:46.630690     749 scope.go:117] "RemoveContainer" containerID="5bed382b465f29e125aa4acb35f3e43d30cb2fa5b8aadd1ad04f56abc10722a7"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884
helpers_test.go:269: (dbg) Run:  kubectl --context ha-423884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (3.90s)

x
+
TestMultiControlPlane/serial/AddSecondaryNode (90.39s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 node add --control-plane --alsologtostderr -v 5: (1m25.726752743s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5: (1.399519037s)
ha_test.go:618: status says not all three control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m04
type: Worker
host: Running
kubelet: Running

ha-423884-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:621: status says not all four hosts are running: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m04
type: Worker
host: Running
kubelet: Running

ha-423884-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha_test.go:624: status says not all four kubelets are running: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-423884-m04
type: Worker
host: Running
kubelet: Running

ha-423884-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:627: status says not all three apiservers are running: args "out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5": ha-423884
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-423884-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-423884-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
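
Each of the assertions above (ha_test.go:618, 621, 624 and 627) re-parses this same plain-text status dump, counting node roles and component states. A minimal sketch of redoing those counts by hand against this profile, assuming the status text format shown above (the /tmp path is only illustrative, not part of the test suite):

    # Capture the status text once, then count the lines the assertions look at
    out/minikube-linux-arm64 -p ha-423884 status > /tmp/ha-status.txt
    grep -c 'type: Control Plane' /tmp/ha-status.txt   # control-plane nodes in the dump
    grep -c 'host: Running'       /tmp/ha-status.txt   # hosts reported Running
    grep -c 'kubelet: Running'    /tmp/ha-status.txt   # kubelets reported Running
    grep -c 'apiserver: Running'  /tmp/ha-status.txt   # apiservers reported Running

In the dump above all five nodes report Running, so the failure appears to be in how the expected counts are compared rather than in the cluster itself.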

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-423884
helpers_test.go:243: (dbg) docker inspect ha-423884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	        "Created": "2025-11-09T13:50:17.166169915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56035,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:03:28.454326897Z",
	            "FinishedAt": "2025-11-09T14:03:27.198748336Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hosts",
	        "LogPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8-json.log",
	        "Name": "/ha-423884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-423884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-423884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	                "LowerDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-423884",
	                "Source": "/var/lib/docker/volumes/ha-423884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-423884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-423884",
	                "name.minikube.sigs.k8s.io": "ha-423884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a517d91b9dd2fa9b7c1a86f3c7ce600153c1394576da0eb7ce565af8604f53c",
	            "SandboxKey": "/var/run/docker/netns/1a517d91b9dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-423884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:a0:79:53:a9:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b901b8dcb82129bdc4c62d2bf9cac8a365e41b87cf75b0978b149071ce152f44",
	                    "EndpointID": "863a231ee9ea532fe20e7b03570549e0d16ef617b4f2a4ad156998677dd29113",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-423884",
	                        "8c902201acb6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
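
The post-mortem captures the full docker inspect document above; later in these logs the provisioner queries the same data field-by-field with Go templates (see the cli_runner.go lines below). A small sketch of running those two lookups by hand against the same container, with shell quoting adjusted so the templates survive the shell:

    # Host port mapped to the container's SSH endpoint (22/tcp); the Ports block above shows 32818
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-423884
    # Container IPv4 address on the ha-423884 network; the Networks block above shows 192.168.49.2
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-423884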
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884
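
The helper above only needs the host state, but the same --format flag takes a Go template over the other status fields as well. A sketch, assuming the standard minikube status fields (Host, Kubelet, APIServer, Kubeconfig):

    # One line summarising each component for the primary node
    out/minikube-linux-arm64 status -p ha-423884 -n ha-423884 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'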
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 logs -n 25: (1.924275477s)
helpers_test.go:260: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp testdata/cp-test.txt ha-423884-m04:/home/docker/cp-test.txt                                                            │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m04.txt │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m04_ha-423884.txt                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884.txt                                                │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node start m02 --alsologtostderr -v 5                                                                                     │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:54 UTC │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │ 09 Nov 25 13:54 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5                                                                                  │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:02 UTC │                     │
	│ node    │ ha-423884 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │ 09 Nov 25 14:03 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │ 09 Nov 25 14:05 UTC │
	│ node    │ ha-423884 node add --control-plane --alsologtostderr -v 5                                                                           │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:05 UTC │ 09 Nov 25 14:06 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:03:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:03:28.177539   55908 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:03:28.177725   55908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:28.177737   55908 out.go:374] Setting ErrFile to fd 2...
	I1109 14:03:28.177743   55908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:28.178015   55908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:03:28.178387   55908 out.go:368] Setting JSON to false
	I1109 14:03:28.179233   55908 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2759,"bootTime":1762694250,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:03:28.179304   55908 start.go:143] virtualization:  
	I1109 14:03:28.182654   55908 out.go:179] * [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:03:28.186399   55908 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:03:28.186530   55908 notify.go:221] Checking for updates...
	I1109 14:03:28.192400   55908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:03:28.195380   55908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:28.198311   55908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:03:28.201212   55908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:03:28.204122   55908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:03:28.207578   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:28.208223   55908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:03:28.238570   55908 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:03:28.238679   55908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:28.302173   55908 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 14:03:28.29285158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:28.302284   55908 docker.go:319] overlay module found
	I1109 14:03:28.305382   55908 out.go:179] * Using the docker driver based on existing profile
	I1109 14:03:28.308271   55908 start.go:309] selected driver: docker
	I1109 14:03:28.308292   55908 start.go:930] validating driver "docker" against &{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:28.308437   55908 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:03:28.308547   55908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:28.367315   55908 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 14:03:28.35650136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:28.367739   55908 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:03:28.367770   55908 cni.go:84] Creating CNI manager for ""
	I1109 14:03:28.367814   55908 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 14:03:28.367923   55908 start.go:353] cluster config:
	{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:28.372921   55908 out.go:179] * Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	I1109 14:03:28.375587   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:03:28.378486   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:03:28.381428   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:28.381482   55908 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:03:28.381492   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:03:28.381532   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:03:28.381584   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:03:28.381603   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:03:28.381760   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:28.401896   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:03:28.401919   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:03:28.401946   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:03:28.401968   55908 start.go:360] acquireMachinesLock for ha-423884: {Name:mkda5c7a1ce8a51da0d8a40a6bd47565509d6909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:03:28.402035   55908 start.go:364] duration metric: took 47.073µs to acquireMachinesLock for "ha-423884"
	I1109 14:03:28.402054   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:03:28.402059   55908 fix.go:54] fixHost starting: 
	I1109 14:03:28.402320   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:28.419704   55908 fix.go:112] recreateIfNeeded on ha-423884: state=Stopped err=<nil>
	W1109 14:03:28.419733   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:03:28.423107   55908 out.go:252] * Restarting existing docker container for "ha-423884" ...
	I1109 14:03:28.423213   55908 cli_runner.go:164] Run: docker start ha-423884
	I1109 14:03:28.683970   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:28.706610   55908 kic.go:430] container "ha-423884" state is running.
	I1109 14:03:28.707012   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:28.730099   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:28.730346   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:03:28.730410   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:28.752410   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:28.752757   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:28.752774   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:03:28.753518   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:03:31.903504   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 14:03:31.903534   55908 ubuntu.go:182] provisioning hostname "ha-423884"
	I1109 14:03:31.903601   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:31.923571   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:31.923916   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:31.923929   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884 && echo "ha-423884" | sudo tee /etc/hostname
	I1109 14:03:32.084992   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 14:03:32.085077   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.103777   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:32.104122   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:32.104149   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:03:32.256008   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:03:32.256036   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:03:32.256065   55908 ubuntu.go:190] setting up certificates
	I1109 14:03:32.256074   55908 provision.go:84] configureAuth start
	I1109 14:03:32.256143   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:32.275304   55908 provision.go:143] copyHostCerts
	I1109 14:03:32.275347   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:32.275379   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:03:32.275389   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:32.275467   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:03:32.275563   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:32.275585   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:03:32.275593   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:32.275622   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:03:32.275677   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:32.275699   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:03:32.275704   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:32.275734   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:03:32.275800   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884 san=[127.0.0.1 192.168.49.2 ha-423884 localhost minikube]
	I1109 14:03:32.661025   55908 provision.go:177] copyRemoteCerts
	I1109 14:03:32.661095   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:03:32.661138   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.678774   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:32.784475   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:03:32.784549   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:03:32.802319   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:03:32.802376   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:03:32.819169   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:03:32.819280   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1109 14:03:32.836450   55908 provision.go:87] duration metric: took 580.362722ms to configureAuth
	I1109 14:03:32.836513   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:03:32.836762   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:32.836868   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.853354   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:32.853661   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:32.853680   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:03:33.144760   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:03:33.144782   55908 machine.go:97] duration metric: took 4.41442095s to provisionDockerMachine
	I1109 14:03:33.144794   55908 start.go:293] postStartSetup for "ha-423884" (driver="docker")
	I1109 14:03:33.144804   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:03:33.144881   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:03:33.144923   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.163262   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.271726   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:03:33.275165   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:03:33.275193   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:03:33.275203   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:03:33.275256   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:03:33.275333   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:03:33.275341   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:03:33.275445   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:03:33.282869   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:33.300086   55908 start.go:296] duration metric: took 155.276378ms for postStartSetup
	I1109 14:03:33.300181   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:33.300227   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.318900   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.421156   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:03:33.426364   55908 fix.go:56] duration metric: took 5.024296824s for fixHost
	I1109 14:03:33.426438   55908 start.go:83] releasing machines lock for "ha-423884", held for 5.024394146s
	I1109 14:03:33.426527   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:33.444332   55908 ssh_runner.go:195] Run: cat /version.json
	I1109 14:03:33.444382   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.444389   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:03:33.444465   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.466109   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.468674   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.567827   55908 ssh_runner.go:195] Run: systemctl --version
	I1109 14:03:33.665464   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:03:33.703682   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:03:33.708050   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:03:33.708118   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:03:33.716273   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:03:33.716295   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:03:33.716329   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:03:33.716378   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:03:33.732433   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:03:33.746199   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:03:33.746294   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:03:33.762279   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:03:33.775981   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:03:33.917723   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:03:34.035293   55908 docker.go:234] disabling docker service ...
	I1109 14:03:34.035371   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:03:34.050665   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:03:34.063795   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:03:34.194207   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:03:34.316201   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:03:34.328760   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:03:34.342596   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:03:34.342661   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.351380   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:03:34.351501   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.360283   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.369198   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.378151   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:03:34.386268   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.394888   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.403377   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.412509   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:03:34.419807   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:03:34.427015   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:34.533676   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:03:34.661746   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:03:34.661816   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:03:34.665477   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:03:34.665590   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:03:34.668882   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:03:34.697803   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:03:34.697964   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:34.726272   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:34.758410   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:03:34.761247   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:03:34.776734   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:03:34.780588   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:34.790316   55908 kubeadm.go:884] updating cluster {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:03:34.790470   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:34.790530   55908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:03:34.825584   55908 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:03:34.825621   55908 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:03:34.825685   55908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:03:34.851854   55908 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:03:34.851980   55908 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:03:34.851997   55908 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 14:03:34.852146   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:03:34.852273   55908 ssh_runner.go:195] Run: crio config
	I1109 14:03:34.903939   55908 cni.go:84] Creating CNI manager for ""
	I1109 14:03:34.903963   55908 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 14:03:34.903981   55908 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:03:34.904009   55908 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423884 NodeName:ha-423884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:03:34.904140   55908 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:03:34.904162   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:03:34.904219   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:03:34.915786   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:34.915909   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
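
The kube-vip manifest above advertises the VIP via ARP (vip_arp=true); IPVS-based control-plane load balancing was skipped because the `lsmod | grep ip_vs` probe earlier in this section returned nothing. A minimal sketch, assuming the node kernel actually ships the IPVS modules, of how they could be checked and loaded by hand (hypothetical manual steps, not part of this test run):

  # probe for the IPVS modules; empty output means kube-vip falls back as logged above
  lsmod | grep ip_vs || true
  # load the core IPVS module plus the common schedulers, if the kernel provides them
  sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
  lsmod | grep ip_vs
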
	I1109 14:03:34.915977   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:03:34.923406   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:03:34.923480   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1109 14:03:34.931134   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1109 14:03:34.943678   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:03:34.956560   55908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1109 14:03:34.969028   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
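
At this point the generated kubeadm config and kube-vip manifest shown earlier have been written to /var/tmp/minikube/kubeadm.yaml.new and /etc/kubernetes/manifests/kube-vip.yaml. As a hedged aside, one way to sanity-check the kubeadm file by hand would be kubeadm's own validator, assuming the `config validate` subcommand is available in the v1.34.1 binary staged under /var/lib/minikube/binaries (a hypothetical manual step, not something this run performs):

  # hypothetical manual check on the node, using the staged kubeadm binary
  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
    --config /var/tmp/minikube/kubeadm.yaml.new
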
	I1109 14:03:34.981532   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:03:34.985043   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:34.994528   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:35.107177   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:35.123121   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.2
	I1109 14:03:35.123194   55908 certs.go:195] generating shared ca certs ...
	I1109 14:03:35.123226   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:35.123409   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:03:35.123481   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:03:35.123518   55908 certs.go:257] generating profile certs ...
	I1109 14:03:35.123657   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:03:35.123781   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612
	I1109 14:03:35.123858   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:03:35.123923   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:03:35.123960   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:03:35.124009   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:03:35.124043   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:03:35.124090   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:03:35.124123   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:03:35.124169   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:03:35.124203   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:03:35.124294   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:03:35.124369   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:03:35.124408   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:03:35.124455   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:03:35.124508   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:03:35.124566   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:03:35.124648   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:35.124724   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.124808   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.124844   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.125710   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:03:35.143578   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:03:35.160309   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:03:35.180028   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:03:35.198803   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:03:35.222988   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:03:35.246464   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:03:35.273513   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:03:35.298574   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:03:35.323310   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:03:35.344665   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:03:35.365172   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:03:35.378569   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:03:35.385015   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:03:35.394601   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.398299   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.398412   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.453607   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:03:35.463012   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:03:35.471886   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.475852   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.475960   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.519535   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:03:35.532870   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:03:35.541526   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.545559   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.545647   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.587429   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
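
The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: the link name is the certificate's subject hash plus a ".0" suffix. A short sketch for the minikubeCA case, using only commands already visible in the log (the hash value is the one this run produced):

  # print the subject hash that becomes the symlink name (b5213941 for minikubeCA.pem in this run)
  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  # link the cert under that hash so OpenSSL-based clients can find it
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
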
	I1109 14:03:35.595355   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:03:35.598863   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:03:35.639394   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:03:35.682546   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:03:35.723686   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:03:35.769486   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:03:35.818163   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:03:35.873301   55908 kubeadm.go:401] StartCluster: {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:35.873423   55908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:03:35.873481   55908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:03:35.949725   55908 cri.go:89] found id: "947390d8997ffb89bea0e3c1e1bca5c1f8dd53d457d88db5aafd7664dbcb65b2"
	I1109 14:03:35.949794   55908 cri.go:89] found id: "c0ba74e816e1338d86f2f29c211b83c172784bbf106dba7bae518b2ee0201a4e"
	I1109 14:03:35.949821   55908 cri.go:89] found id: "785a023345fda66c98e73a27cd2aa79f3beb28f1d9847ff2264dd21ee91db42a"
	I1109 14:03:35.949838   55908 cri.go:89] found id: ""
	I1109 14:03:35.949915   55908 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:03:35.976461   55908 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:03:35Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:03:35.976622   55908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:03:35.995533   55908 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:03:35.995601   55908 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:03:35.995698   55908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:03:36.007080   55908 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:36.007609   55908 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-423884" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:36.007785   55908 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "ha-423884" cluster setting kubeconfig missing "ha-423884" context setting]
	I1109 14:03:36.008206   55908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.008996   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:03:36.009887   55908 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 14:03:36.009995   55908 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 14:03:36.010046   55908 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 14:03:36.010070   55908 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 14:03:36.009972   55908 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1109 14:03:36.010189   55908 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 14:03:36.010607   55908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:03:36.028288   55908 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1109 14:03:36.028364   55908 kubeadm.go:602] duration metric: took 32.744336ms to restartPrimaryControlPlane
	I1109 14:03:36.028386   55908 kubeadm.go:403] duration metric: took 155.094636ms to StartCluster
	I1109 14:03:36.028414   55908 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.028527   55908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:36.029250   55908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.029535   55908 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:03:36.029589   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:03:36.029633   55908 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:03:36.030494   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:36.035208   55908 out.go:179] * Enabled addons: 
	I1109 14:03:36.040262   55908 addons.go:515] duration metric: took 10.631239ms for enable addons: enabled=[]
	I1109 14:03:36.040364   55908 start.go:247] waiting for cluster config update ...
	I1109 14:03:36.040385   55908 start.go:256] writing updated cluster config ...
	I1109 14:03:36.043855   55908 out.go:203] 
	I1109 14:03:36.047167   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:36.047362   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.050885   55908 out.go:179] * Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	I1109 14:03:36.053842   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:03:36.056999   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:03:36.060038   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:03:36.060318   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:36.060344   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:03:36.060467   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:03:36.060496   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:03:36.060681   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.087960   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:03:36.087980   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:03:36.087991   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:03:36.088015   55908 start.go:360] acquireMachinesLock for ha-423884-m02: {Name:mkc465d60ac134a0502b48f535d5c2db44f7f07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:03:36.088071   55908 start.go:364] duration metric: took 40.263µs to acquireMachinesLock for "ha-423884-m02"
	I1109 14:03:36.088090   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:03:36.088095   55908 fix.go:54] fixHost starting: m02
	I1109 14:03:36.088348   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:36.119614   55908 fix.go:112] recreateIfNeeded on ha-423884-m02: state=Stopped err=<nil>
	W1109 14:03:36.119639   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:03:36.123884   55908 out.go:252] * Restarting existing docker container for "ha-423884-m02" ...
	I1109 14:03:36.123973   55908 cli_runner.go:164] Run: docker start ha-423884-m02
	I1109 14:03:36.530699   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:36.559612   55908 kic.go:430] container "ha-423884-m02" state is running.
	I1109 14:03:36.560004   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:36.586384   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.586624   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:03:36.586695   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:36.615730   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:36.616048   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:36.616058   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:03:36.616804   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49240->127.0.0.1:32823: read: connection reset by peer
	I1109 14:03:39.844217   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 14:03:39.844255   55908 ubuntu.go:182] provisioning hostname "ha-423884-m02"
	I1109 14:03:39.844325   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:39.868660   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:39.868984   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:39.869001   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m02 && echo "ha-423884-m02" | sudo tee /etc/hostname
	I1109 14:03:40.093355   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 14:03:40.093437   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.121586   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:40.121898   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:40.121920   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:03:40.328493   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:03:40.328522   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:03:40.328538   55908 ubuntu.go:190] setting up certificates
	I1109 14:03:40.328548   55908 provision.go:84] configureAuth start
	I1109 14:03:40.328618   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:40.372055   55908 provision.go:143] copyHostCerts
	I1109 14:03:40.372096   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:40.372169   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:03:40.372176   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:40.372257   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:03:40.372331   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:40.372347   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:03:40.372352   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:40.372377   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:03:40.372418   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:40.372433   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:03:40.372437   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:40.372461   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:03:40.372508   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m02 san=[127.0.0.1 192.168.49.3 ha-423884-m02 localhost minikube]
	I1109 14:03:40.460419   55908 provision.go:177] copyRemoteCerts
	I1109 14:03:40.460536   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:03:40.460611   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.505492   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:40.630054   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:03:40.630110   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:03:40.653044   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:03:40.653106   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:03:40.683285   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:03:40.683343   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:03:40.713212   55908 provision.go:87] duration metric: took 384.650953ms to configureAuth
	I1109 14:03:40.713278   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:03:40.713537   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:40.713674   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.745458   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:40.745765   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:40.745786   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:03:41.160286   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:03:41.160309   55908 machine.go:97] duration metric: took 4.573667407s to provisionDockerMachine
	I1109 14:03:41.160321   55908 start.go:293] postStartSetup for "ha-423884-m02" (driver="docker")
	I1109 14:03:41.160332   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:03:41.160396   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:03:41.160449   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.178991   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.284963   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:03:41.288725   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:03:41.288763   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:03:41.288776   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:03:41.288833   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:03:41.288922   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:03:41.288929   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:03:41.289033   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:03:41.297714   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:41.316091   55908 start.go:296] duration metric: took 155.749725ms for postStartSetup
	I1109 14:03:41.316183   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:41.316251   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.332754   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.441566   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:03:41.446853   55908 fix.go:56] duration metric: took 5.358725913s for fixHost
	I1109 14:03:41.446878   55908 start.go:83] releasing machines lock for "ha-423884-m02", held for 5.358799177s
	I1109 14:03:41.446969   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:41.471189   55908 out.go:179] * Found network options:
	I1109 14:03:41.474105   55908 out.go:179]   - NO_PROXY=192.168.49.2
	W1109 14:03:41.477016   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:03:41.477060   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:03:41.477139   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:03:41.477182   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.477214   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:03:41.477268   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.498901   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.500358   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.696694   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:03:41.701371   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:03:41.701516   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:03:41.709683   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:03:41.709721   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:03:41.709755   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:03:41.709825   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:03:41.725678   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:03:41.739787   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:03:41.739856   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:03:41.757143   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:03:41.771643   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:03:41.900022   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:03:42.105606   55908 docker.go:234] disabling docker service ...
	I1109 14:03:42.105681   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:03:42.144421   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:03:42.178839   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:03:42.468213   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:03:42.691726   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:03:42.709612   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:03:42.730882   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:03:42.730946   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.740089   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:03:42.740148   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.750087   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.759038   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.773257   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:03:42.782648   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.800890   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.812622   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.829326   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:03:42.846516   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:03:42.860429   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:43.078130   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
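
Taken together, the sed edits above (pause_image, cgroup_manager, conmon_cgroup, and the default_sysctls entry) rewrite /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A minimal sketch of spot-checking the result over SSH, with the expected values reconstructed from the commands in the log rather than captured from the node (the exact layout of the stock kicbase drop-in is assumed):

  # hypothetical spot-check of the drop-in that the sed edits above produce
  sudo grep -E 'cgroup_manager|conmon_cgroup|default_sysctls|pause_image|ip_unprivileged' \
    /etc/crio/crio.conf.d/02-crio.conf
  # expected, roughly:
  #   pause_image = "registry.k8s.io/pause:3.10.1"
  #   cgroup_manager = "cgroupfs"
  #   conmon_cgroup = "pod"
  #   default_sysctls = [
  #     "net.ipv4.ip_unprivileged_port_start=0",
  #   ]
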
	I1109 14:03:43.300172   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:03:43.300292   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:03:43.304336   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:03:43.304441   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:03:43.308290   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:03:43.334041   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:03:43.334158   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:43.366433   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:43.403997   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:03:43.406881   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:03:43.409947   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:03:43.426148   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:03:43.430019   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:43.439859   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:03:43.440179   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:43.440497   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:43.458429   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:03:43.458717   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.3
	I1109 14:03:43.458732   55908 certs.go:195] generating shared ca certs ...
	I1109 14:03:43.458747   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:43.458858   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:03:43.458906   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:03:43.458917   55908 certs.go:257] generating profile certs ...
	I1109 14:03:43.458991   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:03:43.459044   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.75d82079
	I1109 14:03:43.459087   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:03:43.459098   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:03:43.459110   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:03:43.459125   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:03:43.459143   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:03:43.459162   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:03:43.459178   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:03:43.459192   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:03:43.459209   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:03:43.459262   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:03:43.459293   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:03:43.459305   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:03:43.459331   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:03:43.459355   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:03:43.459385   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:03:43.459432   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:43.459462   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.459482   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:03:43.459498   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:03:43.459553   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:43.476791   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:43.576150   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 14:03:43.579947   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 14:03:43.588442   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 14:03:43.591845   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 14:03:43.600302   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 14:03:43.603828   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 14:03:43.612657   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 14:03:43.616127   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 14:03:43.624209   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 14:03:43.627692   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 14:03:43.635688   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 14:03:43.639181   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 14:03:43.647210   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:03:43.665935   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:03:43.683098   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:03:43.701792   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:03:43.720535   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:03:43.738207   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:03:43.756027   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:03:43.774278   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:03:43.792937   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:03:43.811113   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:03:43.829133   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:03:43.847536   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 14:03:43.860908   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 14:03:43.873289   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 14:03:43.886865   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 14:03:43.900616   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 14:03:43.913948   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 14:03:43.927015   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 14:03:43.939523   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:03:43.945583   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:03:43.954590   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.958760   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.958867   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.999953   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:03:44.007895   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:03:44.020206   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.024532   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.024619   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.068208   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:03:44.079840   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:03:44.089486   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.094109   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.094227   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.137949   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:03:44.146324   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:03:44.150369   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:03:44.191825   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:03:44.232925   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:03:44.273939   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:03:44.314652   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:03:44.356028   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:03:44.407731   55908 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1109 14:03:44.407917   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:03:44.407958   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:03:44.408031   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:03:44.419991   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:44.420052   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
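The `lsmod | grep ip_vs` probe just above exited non-zero, so minikube skips kube-vip's IPVS-based control-plane load balancing and relies on the ARP/leader-election settings in the generated manifest (vip_arp, vip_leaderelection, vip_interface=eth0) to keep the VIP 192.168.49.254 reachable. A minimal sketch of the same module probe in Go, reading /proc/modules directly instead of shelling out (assumption: lsmod's module list comes from /proc/modules):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasModule reports whether a kernel module name appears in /proc/modules,
    // which is the same information `lsmod` prints.
    func hasModule(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.HasPrefix(sc.Text(), name+" ") {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasModule("ip_vs")
        if err != nil {
            panic(err)
        }
        // false in this run, so control-plane load balancing is not enabled.
        fmt.Println("ip_vs loaded:", ok)
    }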
	I1109 14:03:44.420129   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:03:44.427945   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:03:44.428013   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 14:03:44.435476   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:03:44.448591   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:03:44.461928   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:03:44.475231   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:03:44.478933   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:44.488867   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:44.623612   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:44.638897   55908 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:03:44.639336   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:44.643324   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:03:44.646391   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:44.766731   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:44.781836   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:03:44.781971   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:03:44.782234   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m02" to be "Ready" ...
	W1109 14:03:54.783441   55908 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	I1109 14:03:58.293061   55908 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:04:08.294056   55908 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.49.1:36070->192.168.49.2:8443: read: connection reset by peer
	I1109 14:04:10.224067   55908 node_ready.go:49] node "ha-423884-m02" is "Ready"
	I1109 14:04:10.224094   55908 node_ready.go:38] duration metric: took 25.441822993s for node "ha-423884-m02" to be "Ready" ...
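The two TLS-handshake-timeout warnings above are transient: the client has been retargeted from the stale VIP to https://192.168.49.2:8443, and that apiserver is still coming back up after the restart, so the poll simply retries until the node reports Ready (about 25s here). A minimal client-go sketch of the same readiness check; the kubeconfig path is hypothetical, whereas minikube builds its client directly from the profile's client.crt/client.key shown in the rest.Config above:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, used here only to build a client.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        for {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-423884-m02", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            // Retry on transient errors such as the TLS handshake timeouts above.
            time.Sleep(3 * time.Second)
        }
    }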
	I1109 14:04:10.224107   55908 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:04:10.224169   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:10.237071   55908 api_server.go:72] duration metric: took 25.598086143s to wait for apiserver process to appear ...
	I1109 14:04:10.237093   55908 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:04:10.237122   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:10.273674   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:10.273706   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:10.737933   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:10.747401   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:10.747476   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:11.238081   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:11.253573   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:11.253663   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:11.737248   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:11.745671   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:11.745753   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:12.237288   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:12.246058   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 14:04:12.247325   55908 api_server.go:141] control plane version: v1.34.1
	I1109 14:04:12.247378   55908 api_server.go:131] duration metric: took 2.0102771s to wait for apiserver health ...
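The 500 responses above come from /healthz while a few post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, bootstrap-controller) have not yet finished; minikube re-polls roughly every 500ms until the endpoint returns 200 "ok", which here takes about two seconds. A minimal sketch of such a poll; TLS verification is skipped purely for brevity, whereas minikube trusts its own cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // For illustration only: skip certificate verification instead of
            // loading the cluster CA the way minikube does.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }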
	I1109 14:04:12.247399   55908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:04:12.255293   55908 system_pods.go:59] 26 kube-system pods found
	I1109 14:04:12.255379   55908 system_pods.go:61] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running
	I1109 14:04:12.255399   55908 system_pods.go:61] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running
	I1109 14:04:12.255418   55908 system_pods.go:61] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:12.255451   55908 system_pods.go:61] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:12.255475   55908 system_pods.go:61] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:12.255490   55908 system_pods.go:61] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:12.255507   55908 system_pods.go:61] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:12.255525   55908 system_pods.go:61] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:12.255556   55908 system_pods.go:61] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:12.255578   55908 system_pods.go:61] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:12.255596   55908 system_pods.go:61] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:12.255613   55908 system_pods.go:61] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:12.255631   55908 system_pods.go:61] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:12.255657   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:12.255679   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:12.255698   55908 system_pods.go:61] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:12.255716   55908 system_pods.go:61] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:12.255733   55908 system_pods.go:61] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:12.255760   55908 system_pods.go:61] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:12.255785   55908 system_pods.go:61] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:12.255802   55908 system_pods.go:61] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:12.255819   55908 system_pods.go:61] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:12.255834   55908 system_pods.go:61] "kube-vip-ha-423884" [8470dcc0-6c4f-4241-ad4e-8b896f6712b0] Running
	I1109 14:04:12.255904   55908 system_pods.go:61] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:12.255931   55908 system_pods.go:61] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:12.255949   55908 system_pods.go:61] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:12.255967   55908 system_pods.go:74] duration metric: took 8.549678ms to wait for pod list to return data ...
	I1109 14:04:12.255987   55908 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:04:12.259644   55908 default_sa.go:45] found service account: "default"
	I1109 14:04:12.259701   55908 default_sa.go:55] duration metric: took 3.685783ms for default service account to be created ...
	I1109 14:04:12.259723   55908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:04:12.265757   55908 system_pods.go:86] 26 kube-system pods found
	I1109 14:04:12.265830   55908 system_pods.go:89] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running
	I1109 14:04:12.265849   55908 system_pods.go:89] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running
	I1109 14:04:12.265871   55908 system_pods.go:89] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:12.265906   55908 system_pods.go:89] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:12.265928   55908 system_pods.go:89] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:12.265945   55908 system_pods.go:89] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:12.265961   55908 system_pods.go:89] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:12.265977   55908 system_pods.go:89] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:12.266004   55908 system_pods.go:89] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:12.266025   55908 system_pods.go:89] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:12.266042   55908 system_pods.go:89] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:12.266059   55908 system_pods.go:89] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:12.266077   55908 system_pods.go:89] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:12.266107   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:12.266238   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:12.266258   55908 system_pods.go:89] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:12.266274   55908 system_pods.go:89] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:12.266290   55908 system_pods.go:89] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:12.266322   55908 system_pods.go:89] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:12.266345   55908 system_pods.go:89] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:12.266364   55908 system_pods.go:89] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:12.266382   55908 system_pods.go:89] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:12.266400   55908 system_pods.go:89] "kube-vip-ha-423884" [8470dcc0-6c4f-4241-ad4e-8b896f6712b0] Running
	I1109 14:04:12.266427   55908 system_pods.go:89] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:12.266450   55908 system_pods.go:89] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:12.266468   55908 system_pods.go:89] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:12.266489   55908 system_pods.go:126] duration metric: took 6.747337ms to wait for k8s-apps to be running ...
	I1109 14:04:12.266510   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:12.266588   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:12.282135   55908 system_svc.go:56] duration metric: took 15.616371ms WaitForService to wait for kubelet
	I1109 14:04:12.282232   55908 kubeadm.go:587] duration metric: took 27.643251935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:12.282264   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:12.287797   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.287962   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.287995   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288016   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288036   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288054   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288080   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288104   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288124   55908 node_conditions.go:105] duration metric: took 5.843459ms to run NodePressure ...
	I1109 14:04:12.288147   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:12.288194   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:12.292016   55908 out.go:203] 
	I1109 14:04:12.295240   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:12.295416   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.298693   55908 out.go:179] * Starting "ha-423884-m03" control-plane node in "ha-423884" cluster
	I1109 14:04:12.302221   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:04:12.305225   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:04:12.307950   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:04:12.307975   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:04:12.308093   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:04:12.308103   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:04:12.308245   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.308454   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:04:12.335753   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:04:12.335772   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:04:12.335783   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:04:12.335806   55908 start.go:360] acquireMachinesLock for ha-423884-m03: {Name:mk2c1f49120f6acdbb0b7c106d84b578b982c1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:04:12.335852   55908 start.go:364] duration metric: took 32.608µs to acquireMachinesLock for "ha-423884-m03"
	I1109 14:04:12.335906   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:04:12.335913   55908 fix.go:54] fixHost starting: m03
	I1109 14:04:12.336176   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:04:12.360018   55908 fix.go:112] recreateIfNeeded on ha-423884-m03: state=Stopped err=<nil>
	W1109 14:04:12.360050   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:04:12.363431   55908 out.go:252] * Restarting existing docker container for "ha-423884-m03" ...
	I1109 14:04:12.363592   55908 cli_runner.go:164] Run: docker start ha-423884-m03
	I1109 14:04:12.653356   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:04:12.683958   55908 kic.go:430] container "ha-423884-m03" state is running.
	I1109 14:04:12.684306   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:12.727840   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.728107   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:04:12.728163   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:12.759896   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:12.760195   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:12.760204   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:04:12.761068   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:04:16.033281   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m03
	
	I1109 14:04:16.033354   55908 ubuntu.go:182] provisioning hostname "ha-423884-m03"
	I1109 14:04:16.033448   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:16.074078   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:16.074389   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:16.074407   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m03 && echo "ha-423884-m03" | sudo tee /etc/hostname
	I1109 14:04:16.423110   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m03
	
	I1109 14:04:16.423192   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:16.456144   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:16.456500   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:16.456523   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:04:16.751298   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
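Provisioning of ha-423884-m03 runs the hostname and /etc/hosts commands shown above over SSH: host port 32828 maps to the container's sshd, user "docker", authenticated with the machine's id_rsa key from the profile directory. A minimal sketch of running one such remote command with golang.org/x/crypto/ssh, under those assumptions:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Host key checking skipped for brevity; a real tool should pin it.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32828", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname ha-423884-m03 && echo "ha-423884-m03" | sudo tee /etc/hostname`)
        fmt.Printf("output: %s err: %v\n", out, err)
    }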
	I1109 14:04:16.751374   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:04:16.751397   55908 ubuntu.go:190] setting up certificates
	I1109 14:04:16.751407   55908 provision.go:84] configureAuth start
	I1109 14:04:16.751471   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:16.793487   55908 provision.go:143] copyHostCerts
	I1109 14:04:16.793536   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:16.793570   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:04:16.793586   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:16.793664   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:04:16.793744   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:16.793767   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:04:16.793774   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:16.793803   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:04:16.793848   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:16.793870   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:04:16.793874   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:16.793899   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:04:16.793952   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m03 san=[127.0.0.1 192.168.49.4 ha-423884-m03 localhost minikube]
	I1109 14:04:17.244605   55908 provision.go:177] copyRemoteCerts
	I1109 14:04:17.244683   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:04:17.244730   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:17.267714   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:17.397341   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:04:17.397397   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:04:17.451209   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:04:17.451268   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:04:17.501897   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:04:17.501959   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:04:17.543399   55908 provision.go:87] duration metric: took 791.974444ms to configureAuth
	I1109 14:04:17.543429   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:04:17.543658   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:17.543760   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:17.578118   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:17.578425   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:17.578447   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:04:18.006743   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:04:18.006766   55908 machine.go:97] duration metric: took 5.278648591s to provisionDockerMachine
	I1109 14:04:18.006777   55908 start.go:293] postStartSetup for "ha-423884-m03" (driver="docker")
	I1109 14:04:18.006788   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:04:18.006849   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:04:18.006908   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.028378   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.136392   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:04:18.139676   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:04:18.139706   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:04:18.139718   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:04:18.139772   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:04:18.139877   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:04:18.139916   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:04:18.140203   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:04:18.151607   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:18.170641   55908 start.go:296] duration metric: took 163.846632ms for postStartSetup
	I1109 14:04:18.170734   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:04:18.170783   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.190645   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.303725   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:04:18.315157   55908 fix.go:56] duration metric: took 5.979236955s for fixHost
	I1109 14:04:18.315228   55908 start.go:83] releasing machines lock for "ha-423884-m03", held for 5.979367853s
	I1109 14:04:18.315337   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:18.346232   55908 out.go:179] * Found network options:
	I1109 14:04:18.349488   55908 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1109 14:04:18.352634   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352664   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352686   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352696   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:04:18.352763   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:04:18.352815   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.353042   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:04:18.353099   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.407037   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.416133   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.761655   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:04:18.827322   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:04:18.827443   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:04:18.846068   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:04:18.846140   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:04:18.846187   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:04:18.846266   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:04:18.869418   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:04:18.889860   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:04:18.889997   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:04:18.919381   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:04:18.942214   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:04:19.209339   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:04:19.469248   55908 docker.go:234] disabling docker service ...
	I1109 14:04:19.469315   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:04:19.487357   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:04:19.508816   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:04:19.750896   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:04:19.978351   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:04:20.002094   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:04:20.029962   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:04:20.030038   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.046014   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:04:20.046086   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.061773   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.083454   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.096347   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:04:20.114097   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.126722   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.143159   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.160109   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:04:20.177582   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:04:20.196091   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:20.468433   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:04:21.283004   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:04:21.283084   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:04:21.287304   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:04:21.287372   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:04:21.291538   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:04:21.328386   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:04:21.328481   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:21.361417   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:21.451954   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:04:21.455954   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:04:21.459224   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1109 14:04:21.462952   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:04:21.484807   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:04:21.489960   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:21.506775   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:04:21.507015   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:21.507301   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:04:21.526101   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:04:21.526377   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.4
	I1109 14:04:21.526391   55908 certs.go:195] generating shared ca certs ...
	I1109 14:04:21.526407   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:04:21.526515   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:04:21.526559   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:04:21.526572   55908 certs.go:257] generating profile certs ...
	I1109 14:04:21.526658   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:04:21.526726   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.7ffb4171
	I1109 14:04:21.526767   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:04:21.526781   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:04:21.526793   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:04:21.526808   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:04:21.526826   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:04:21.526836   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:04:21.526848   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:04:21.526910   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:04:21.526925   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:04:21.526982   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:04:21.527018   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:04:21.527028   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:04:21.527056   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:04:21.527080   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:04:21.527107   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:04:21.527154   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:21.527185   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:04:21.527200   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:21.527211   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:04:21.527271   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:04:21.551818   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:04:21.676202   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 14:04:21.680212   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 14:04:21.691215   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 14:04:21.701694   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 14:04:21.714762   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 14:04:21.719210   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 14:04:21.729229   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 14:04:21.733219   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 14:04:21.742594   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 14:04:21.746326   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 14:04:21.755768   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 14:04:21.759436   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 14:04:21.771660   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:04:21.795312   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:04:21.815560   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:04:21.833662   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:04:21.852805   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:04:21.870267   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:04:21.889041   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:04:21.907386   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:04:21.925376   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:04:21.943214   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:04:21.961586   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:04:21.979793   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 14:04:21.993395   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 14:04:22.006684   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 14:04:22.033388   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 14:04:22.052052   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 14:04:22.068060   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 14:04:22.086207   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 14:04:22.104940   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:04:22.112046   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:04:22.122102   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.125980   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.126092   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.167702   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:04:22.176107   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:04:22.184759   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.189529   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.189649   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.231896   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:04:22.240788   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:04:22.250648   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.254774   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.254890   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.295743   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:04:22.303694   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:04:22.308400   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:04:22.361240   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:04:22.402093   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:04:22.444367   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:04:22.486212   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:04:22.528227   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:04:22.571111   55908 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1109 14:04:22.571227   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:04:22.571257   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:04:22.571311   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:04:22.583651   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:04:22.583707   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1109 14:04:22.583783   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:04:22.592357   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:04:22.592434   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 14:04:22.602564   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:04:22.615684   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:04:22.634261   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:04:22.648965   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:04:22.652918   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:22.663308   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:22.796103   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:22.812101   55908 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:04:22.812586   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:22.817295   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:04:22.820274   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:22.956399   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:22.970086   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:04:22.970158   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:04:22.970389   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m03" to be "Ready" ...
	I1109 14:04:22.973665   55908 node_ready.go:49] node "ha-423884-m03" is "Ready"
	I1109 14:04:22.973696   55908 node_ready.go:38] duration metric: took 3.289742ms for node "ha-423884-m03" to be "Ready" ...
	I1109 14:04:22.973708   55908 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:04:22.973776   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:23.474233   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:23.974449   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:24.473927   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:24.973967   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:25.474635   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:25.973916   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:26.474480   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:26.974653   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:27.474731   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:27.974238   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:28.474498   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:28.973919   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:29.474517   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:29.974713   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:30.474585   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:30.974741   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:31.473916   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:31.974806   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:32.474537   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:32.973899   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:33.474884   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:33.974179   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:34.473908   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:34.973922   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:35.474186   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:35.974351   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:36.474756   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:36.973943   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:37.474873   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:37.974832   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:38.474095   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:38.486973   55908 api_server.go:72] duration metric: took 15.674824664s to wait for apiserver process to appear ...
	I1109 14:04:38.486994   55908 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:04:38.487013   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:38.496492   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 14:04:38.497757   55908 api_server.go:141] control plane version: v1.34.1
	I1109 14:04:38.497778   55908 api_server.go:131] duration metric: took 10.777406ms to wait for apiserver health ...
	I1109 14:04:38.497787   55908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:04:38.505258   55908 system_pods.go:59] 26 kube-system pods found
	I1109 14:04:38.505350   55908 system_pods.go:61] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.505374   55908 system_pods.go:61] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.505408   55908 system_pods.go:61] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:38.505432   55908 system_pods.go:61] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:38.505449   55908 system_pods.go:61] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:38.505466   55908 system_pods.go:61] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:38.505484   55908 system_pods.go:61] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:38.505510   55908 system_pods.go:61] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:38.505536   55908 system_pods.go:61] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:38.505555   55908 system_pods.go:61] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:38.505572   55908 system_pods.go:61] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:38.505590   55908 system_pods.go:61] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:38.505618   55908 system_pods.go:61] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:38.505641   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:38.505659   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:38.505675   55908 system_pods.go:61] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:38.505694   55908 system_pods.go:61] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:38.505721   55908 system_pods.go:61] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:38.505743   55908 system_pods.go:61] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:38.505761   55908 system_pods.go:61] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:38.505778   55908 system_pods.go:61] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:38.505796   55908 system_pods.go:61] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:38.505824   55908 system_pods.go:61] "kube-vip-ha-423884" [b043421c-6408-4df1-87d9-bc0d12fef736] Running
	I1109 14:04:38.505850   55908 system_pods.go:61] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:38.505867   55908 system_pods.go:61] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:38.505886   55908 system_pods.go:61] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:38.505905   55908 system_pods.go:74] duration metric: took 8.112367ms to wait for pod list to return data ...
	I1109 14:04:38.505935   55908 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:04:38.509739   55908 default_sa.go:45] found service account: "default"
	I1109 14:04:38.509805   55908 default_sa.go:55] duration metric: took 3.846441ms for default service account to be created ...
	I1109 14:04:38.509829   55908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:04:38.517291   55908 system_pods.go:86] 26 kube-system pods found
	I1109 14:04:38.517382   55908 system_pods.go:89] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.517407   55908 system_pods.go:89] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.517444   55908 system_pods.go:89] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:38.517467   55908 system_pods.go:89] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:38.517484   55908 system_pods.go:89] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:38.517500   55908 system_pods.go:89] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:38.517518   55908 system_pods.go:89] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:38.517545   55908 system_pods.go:89] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:38.517568   55908 system_pods.go:89] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:38.517586   55908 system_pods.go:89] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:38.517602   55908 system_pods.go:89] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:38.517620   55908 system_pods.go:89] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:38.517648   55908 system_pods.go:89] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:38.517670   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:38.517688   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:38.517705   55908 system_pods.go:89] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:38.517722   55908 system_pods.go:89] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:38.517750   55908 system_pods.go:89] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:38.517773   55908 system_pods.go:89] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:38.517794   55908 system_pods.go:89] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:38.517812   55908 system_pods.go:89] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:38.517830   55908 system_pods.go:89] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:38.517856   55908 system_pods.go:89] "kube-vip-ha-423884" [b043421c-6408-4df1-87d9-bc0d12fef736] Running
	I1109 14:04:38.517877   55908 system_pods.go:89] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:38.517894   55908 system_pods.go:89] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:38.517911   55908 system_pods.go:89] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:38.517933   55908 system_pods.go:126] duration metric: took 8.084994ms to wait for k8s-apps to be running ...
	I1109 14:04:38.517962   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:38.518068   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:38.532879   55908 system_svc.go:56] duration metric: took 14.908297ms WaitForService to wait for kubelet
	I1109 14:04:38.532917   55908 kubeadm.go:587] duration metric: took 15.720774062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:38.532935   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:38.536579   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536610   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536621   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536625   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536629   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536633   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536636   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536648   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536656   55908 node_conditions.go:105] duration metric: took 3.715265ms to run NodePressure ...
	I1109 14:04:38.536669   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:38.536695   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:38.540432   55908 out.go:203] 
	I1109 14:04:38.543707   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:38.543833   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:38.547314   55908 out.go:179] * Starting "ha-423884-m04" worker node in "ha-423884" cluster
	I1109 14:04:38.550154   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:04:38.553075   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:04:38.555918   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:04:38.555945   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:04:38.555984   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:04:38.556052   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:04:38.556067   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:04:38.556232   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:38.596080   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:04:38.596104   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:04:38.596117   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:04:38.596140   55908 start.go:360] acquireMachinesLock for ha-423884-m04: {Name:mk8ea327a8bd5498886fa5c18402495ffce70373 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:04:38.596197   55908 start.go:364] duration metric: took 36.833µs to acquireMachinesLock for "ha-423884-m04"
	I1109 14:04:38.596221   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:04:38.596226   55908 fix.go:54] fixHost starting: m04
	I1109 14:04:38.596505   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:04:38.628055   55908 fix.go:112] recreateIfNeeded on ha-423884-m04: state=Stopped err=<nil>
	W1109 14:04:38.628083   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:04:38.631296   55908 out.go:252] * Restarting existing docker container for "ha-423884-m04" ...
	I1109 14:04:38.631384   55908 cli_runner.go:164] Run: docker start ha-423884-m04
	I1109 14:04:38.994029   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:04:39.024143   55908 kic.go:430] container "ha-423884-m04" state is running.
	I1109 14:04:39.024645   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:39.049753   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:39.049997   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:04:39.050055   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:39.086245   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:39.086555   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:39.086564   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:04:39.087311   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54962->127.0.0.1:32833: read: connection reset by peer
	I1109 14:04:42.305377   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m04
	
	I1109 14:04:42.305403   55908 ubuntu.go:182] provisioning hostname "ha-423884-m04"
	I1109 14:04:42.305544   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:42.345625   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:42.345948   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:42.345975   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m04 && echo "ha-423884-m04" | sudo tee /etc/hostname
	I1109 14:04:42.540380   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m04
	
	I1109 14:04:42.540467   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:42.568082   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:42.568508   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:42.568528   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:04:42.740938   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:04:42.740964   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:04:42.740987   55908 ubuntu.go:190] setting up certificates
	I1109 14:04:42.740999   55908 provision.go:84] configureAuth start
	I1109 14:04:42.741056   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:42.758596   55908 provision.go:143] copyHostCerts
	I1109 14:04:42.758635   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:42.758666   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:04:42.758673   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:42.758748   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:04:42.758825   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:42.758841   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:04:42.758845   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:42.758872   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:04:42.758947   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:42.758966   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:04:42.758970   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:42.758992   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:04:42.759035   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m04 san=[127.0.0.1 192.168.49.5 ha-423884-m04 localhost minikube]
	I1109 14:04:43.620778   55908 provision.go:177] copyRemoteCerts
	I1109 14:04:43.620850   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:04:43.620891   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:43.638135   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:43.746715   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:04:43.746778   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:04:43.783559   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:04:43.783620   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:04:43.821821   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:04:43.821884   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:04:43.853243   55908 provision.go:87] duration metric: took 1.112229927s to configureAuth
	I1109 14:04:43.853316   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:04:43.853606   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:43.853756   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:43.895433   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:43.895732   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:43.895746   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:04:44.332263   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:04:44.332289   55908 machine.go:97] duration metric: took 5.282283014s to provisionDockerMachine
	I1109 14:04:44.332300   55908 start.go:293] postStartSetup for "ha-423884-m04" (driver="docker")
	I1109 14:04:44.332310   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:04:44.332371   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:04:44.332415   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.353937   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.464143   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:04:44.470188   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:04:44.470214   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:04:44.470225   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:04:44.470281   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:04:44.470354   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:04:44.470361   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:04:44.470470   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:04:44.479795   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:44.529226   55908 start.go:296] duration metric: took 196.901694ms for postStartSetup
	I1109 14:04:44.529386   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:04:44.529460   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.554649   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.673604   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:04:44.680762   55908 fix.go:56] duration metric: took 6.08452744s for fixHost
	I1109 14:04:44.680784   55908 start.go:83] releasing machines lock for "ha-423884-m04", held for 6.084574408s
	I1109 14:04:44.680867   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:44.721415   55908 out.go:179] * Found network options:
	I1109 14:04:44.724159   55908 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1109 14:04:44.726873   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726905   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726917   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726942   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726952   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726961   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:04:44.727033   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:04:44.727074   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:04:44.727134   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.727085   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.759201   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.763544   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:45.037350   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:04:45.135550   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:04:45.135658   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:04:45.148313   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:04:45.148341   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:04:45.148377   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:04:45.148433   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:04:45.185399   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:04:45.214772   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:04:45.214846   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:04:45.250953   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:04:45.287278   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:04:45.661062   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:04:45.935411   55908 docker.go:234] disabling docker service ...
	I1109 14:04:45.935486   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:04:45.952438   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:04:45.980819   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:04:46.226547   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:04:46.528888   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:04:46.569464   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:04:46.593467   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:04:46.593541   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.617190   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:04:46.617307   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.632140   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.655050   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.669679   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:04:46.703425   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.732454   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.748482   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.774220   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:04:46.794338   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:04:46.805580   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:47.010084   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
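As a reference for the block of sed commands above: they only rewrite individual keys in CRI-O's drop-in config before the restart. A minimal sketch of the relevant lines in /etc/crio/crio.conf.d/02-crio.conf after this run (reconstructed from the commands in this log, not captured from the node; key order in the real file may differ):

	# sketch: check the keys the sed commands above touched
	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	  "net.ipv4.ip_unprivileged_port_start=0",
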
	I1109 14:04:47.173577   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:04:47.173656   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:04:47.181540   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:04:47.181604   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:04:47.186006   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:04:47.222300   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:04:47.222379   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:47.253413   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:47.291652   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:04:47.294554   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:04:47.297616   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1109 14:04:47.301230   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1109 14:04:47.304267   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:04:47.343687   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:04:47.347710   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:47.360845   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:04:47.361083   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:47.361322   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:04:47.390238   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:04:47.390509   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.5
	I1109 14:04:47.390516   55908 certs.go:195] generating shared ca certs ...
	I1109 14:04:47.390534   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:04:47.390655   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:04:47.390695   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:04:47.390705   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:04:47.390717   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:04:47.390728   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:04:47.390739   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:04:47.390789   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:04:47.390815   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:04:47.390823   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:04:47.390848   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:04:47.390868   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:04:47.390889   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:04:47.390931   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:47.390957   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.390969   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.390980   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.390996   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:04:47.419171   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:04:47.458480   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:04:47.491840   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:04:47.515467   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:04:47.547694   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:04:47.571204   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:04:47.596967   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:04:47.604617   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:04:47.618704   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.623578   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.623648   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.684940   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:04:47.694950   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:04:47.704570   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.709468   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.709530   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.765768   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:04:47.777604   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:04:47.788177   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.793126   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.793191   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.845154   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
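The symlink names used above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes of the corresponding certificates, which is what the openssl x509 -hash calls in this log compute. A minimal sketch for the minikube CA, with the hash value taken from the symlink created just above:

	# sketch: derive the hash-based symlink name for the minikube CA
	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
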
	I1109 14:04:47.856386   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:04:47.861306   55908 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:04:47.861350   55908 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1109 14:04:47.861449   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:04:47.861522   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:04:47.870269   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:04:47.870337   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1109 14:04:47.880368   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:04:47.897846   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
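The two files written here are the kubelet systemd unit and its kubeadm drop-in. As a hypothetical follow-up on the node (output omitted), they can be viewed together, and the drop-in's ExecStart should match the kubeadm.go:947 dump above:

	# sketch: show the kubelet unit plus its 10-kubeadm.conf drop-in
	$ systemctl cat kubelet
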
	I1109 14:04:47.917114   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:04:47.924685   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
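Together with the host.minikube.internal entry added at 14:04:47.347710 above, /etc/hosts on this node ends up with two minikube-managed entries, roughly as follows (a sketch based on the two edits in this log, not a capture from the node):

	# sketch: the two entries the hosts edits above leave behind
	$ grep minikube.internal /etc/hosts
	192.168.49.1	host.minikube.internal
	192.168.49.254	control-plane.minikube.internal
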
	I1109 14:04:47.936633   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:48.172177   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:48.203009   55908 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1109 14:04:48.203488   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:48.206078   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:04:48.209257   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:48.462006   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:48.478911   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:04:48.478989   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:04:48.479221   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m04" to be "Ready" ...
	I1109 14:04:48.482317   55908 node_ready.go:49] node "ha-423884-m04" is "Ready"
	I1109 14:04:48.482349   55908 node_ready.go:38] duration metric: took 3.109285ms for node "ha-423884-m04" to be "Ready" ...
	I1109 14:04:48.482363   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:48.482419   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:48.500348   55908 system_svc.go:56] duration metric: took 17.977329ms WaitForService to wait for kubelet
	I1109 14:04:48.500378   55908 kubeadm.go:587] duration metric: took 297.325981ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:48.500397   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:48.505686   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505725   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505737   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505742   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505745   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505750   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505754   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505758   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505763   55908 node_conditions.go:105] duration metric: took 5.360822ms to run NodePressure ...
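The NodePressure check above reads each node's allocatable resources and sees the same 2-CPU / 203034800Ki figures on all four nodes. The same numbers can be read back with kubectl, for example (a hypothetical session; output abridged to one node, values matching the "describe nodes" section later in this report):

	# sketch: read allocatable CPU and ephemeral storage per node
	$ kubectl describe nodes | grep -A 2 'Allocatable:'
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
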
	I1109 14:04:48.505778   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:48.505806   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:48.506138   55908 ssh_runner.go:195] Run: rm -f paused
	I1109 14:04:48.511449   55908 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:04:48.512086   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:04:48.531812   55908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wl6rt" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:04:50.538801   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	W1109 14:04:53.041776   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	W1109 14:04:55.540126   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	I1109 14:04:57.039850   55908 pod_ready.go:94] pod "coredns-66bc5c9577-wl6rt" is "Ready"
	I1109 14:04:57.039917   55908 pod_ready.go:86] duration metric: took 8.508070998s for pod "coredns-66bc5c9577-wl6rt" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.039928   55908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x2j4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.047591   55908 pod_ready.go:94] pod "coredns-66bc5c9577-x2j4c" is "Ready"
	I1109 14:04:57.047620   55908 pod_ready.go:86] duration metric: took 7.684548ms for pod "coredns-66bc5c9577-x2j4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.051339   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.057478   55908 pod_ready.go:94] pod "etcd-ha-423884" is "Ready"
	I1109 14:04:57.057507   55908 pod_ready.go:86] duration metric: took 6.138948ms for pod "etcd-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.057516   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.063675   55908 pod_ready.go:94] pod "etcd-ha-423884-m02" is "Ready"
	I1109 14:04:57.063703   55908 pod_ready.go:86] duration metric: took 6.180712ms for pod "etcd-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.063713   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.232913   55908 request.go:683] "Waited before sending request" delay="166.184726ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:04:57.235976   55908 pod_ready.go:94] pod "etcd-ha-423884-m03" is "Ready"
	I1109 14:04:57.236003   55908 pod_ready.go:86] duration metric: took 172.283157ms for pod "etcd-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.433310   55908 request.go:683] "Waited before sending request" delay="197.214303ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1109 14:04:57.437206   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.632527   55908 request.go:683] "Waited before sending request" delay="195.228871ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884"
	I1109 14:04:57.833084   55908 request.go:683] "Waited before sending request" delay="197.197966ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:04:57.836198   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884" is "Ready"
	I1109 14:04:57.836230   55908 pod_ready.go:86] duration metric: took 398.997813ms for pod "kube-apiserver-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.836239   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.032538   55908 request.go:683] "Waited before sending request" delay="196.215039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884-m02"
	I1109 14:04:58.232521   55908 request.go:683] "Waited before sending request" delay="195.230554ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:04:58.236341   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884-m02" is "Ready"
	I1109 14:04:58.236367   55908 pod_ready.go:86] duration metric: took 400.120914ms for pod "kube-apiserver-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.236376   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.433023   55908 request.go:683] "Waited before sending request" delay="196.538827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884-m03"
	I1109 14:04:58.632901   55908 request.go:683] "Waited before sending request" delay="196.260046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:04:58.636121   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884-m03" is "Ready"
	I1109 14:04:58.636150   55908 pod_ready.go:86] duration metric: took 399.76645ms for pod "kube-apiserver-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.832522   55908 request.go:683] "Waited before sending request" delay="196.25788ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1109 14:04:58.836640   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.033076   55908 request.go:683] "Waited before sending request" delay="196.288797ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884"
	I1109 14:04:59.233471   55908 request.go:683] "Waited before sending request" delay="197.170343ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:04:59.236562   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884" is "Ready"
	I1109 14:04:59.236586   55908 pod_ready.go:86] duration metric: took 399.915672ms for pod "kube-controller-manager-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.236595   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.432815   55908 request.go:683] "Waited before sending request" delay="196.151501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884-m02"
	I1109 14:04:59.633389   55908 request.go:683] "Waited before sending request" delay="197.339699ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:04:59.636611   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884-m02" is "Ready"
	I1109 14:04:59.636639   55908 pod_ready.go:86] duration metric: took 400.036716ms for pod "kube-controller-manager-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.636649   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.832944   55908 request.go:683] "Waited before sending request" delay="196.225586ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884-m03"
	I1109 14:05:00.032735   55908 request.go:683] "Waited before sending request" delay="196.153889ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:00.114688   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884-m03" is "Ready"
	I1109 14:05:00.114728   55908 pod_ready.go:86] duration metric: took 478.071803ms for pod "kube-controller-manager-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.242596   55908 request.go:683] "Waited before sending request" delay="127.725515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1109 14:05:00.298102   55908 pod_ready.go:83] waiting for pod "kube-proxy-7z7d2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.433403   55908 request.go:683] "Waited before sending request" delay="135.18186ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z7d2"
	I1109 14:05:00.633480   55908 request.go:683] "Waited before sending request" delay="187.320382ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:05:00.659363   55908 pod_ready.go:94] pod "kube-proxy-7z7d2" is "Ready"
	I1109 14:05:00.659405   55908 pod_ready.go:86] duration metric: took 361.264172ms for pod "kube-proxy-7z7d2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.659421   55908 pod_ready.go:83] waiting for pod "kube-proxy-9kff9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.832720   55908 request.go:683] "Waited before sending request" delay="173.209595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kff9"
	I1109 14:05:01.032589   55908 request.go:683] "Waited before sending request" delay="193.218072ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m04"
	I1109 14:05:01.233422   55908 request.go:683] "Waited before sending request" delay="73.212921ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kff9"
	I1109 14:05:01.433041   55908 request.go:683] "Waited before sending request" delay="190.18265ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m04"
	I1109 14:05:01.437082   55908 pod_ready.go:94] pod "kube-proxy-9kff9" is "Ready"
	I1109 14:05:01.437110   55908 pod_ready.go:86] duration metric: took 777.680802ms for pod "kube-proxy-9kff9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.437119   55908 pod_ready.go:83] waiting for pod "kube-proxy-f4hgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.632461   55908 request.go:683] "Waited before sending request" delay="195.271922ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4hgn"
	I1109 14:05:01.832811   55908 request.go:683] "Waited before sending request" delay="187.236042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:05:01.836535   55908 pod_ready.go:94] pod "kube-proxy-f4hgn" is "Ready"
	I1109 14:05:01.836565   55908 pod_ready.go:86] duration metric: took 399.438784ms for pod "kube-proxy-f4hgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.836576   55908 pod_ready.go:83] waiting for pod "kube-proxy-jcgxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:02.032823   55908 request.go:683] "Waited before sending request" delay="196.168826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jcgxk"
	I1109 14:05:02.232950   55908 request.go:683] "Waited before sending request" delay="192.345884ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:02.432483   55908 request.go:683] "Waited before sending request" delay="95.122005ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jcgxk"
	I1109 14:05:02.632558   55908 request.go:683] "Waited before sending request" delay="196.186501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:03.032762   55908 request.go:683] "Waited before sending request" delay="191.358141ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:03.433075   55908 request.go:683] "Waited before sending request" delay="91.200576ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	W1109 14:05:03.843130   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:05.843241   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:07.843386   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:10.345843   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:12.347116   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	I1109 14:05:12.843484   55908 pod_ready.go:94] pod "kube-proxy-jcgxk" is "Ready"
	I1109 14:05:12.843511   55908 pod_ready.go:86] duration metric: took 11.006928371s for pod "kube-proxy-jcgxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.847315   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.853111   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884" is "Ready"
	I1109 14:05:12.853137   55908 pod_ready.go:86] duration metric: took 5.793657ms for pod "kube-scheduler-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.853146   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.859861   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884-m02" is "Ready"
	I1109 14:05:12.859981   55908 pod_ready.go:86] duration metric: took 6.827161ms for pod "kube-scheduler-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.860005   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.867050   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884-m03" is "Ready"
	I1109 14:05:12.867075   55908 pod_ready.go:86] duration metric: took 7.050311ms for pod "kube-scheduler-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.867087   55908 pod_ready.go:40] duration metric: took 24.355592064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:05:12.924097   55908 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:05:12.927451   55908 out.go:179] * Done! kubectl is now configured to use "ha-423884" cluster and "default" namespace by default
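At this point the profile has three control-plane nodes (ha-423884, ha-423884-m02, ha-423884-m03) and one worker (ha-423884-m04), all reported Ready earlier in this log, so a follow-up check would look roughly like this (a hypothetical session; statuses, roles, and versions are taken from this report, AGE column omitted):

	# sketch: list the four cluster nodes after the restart completes
	$ kubectl get nodes
	NAME            STATUS   ROLES           VERSION
	ha-423884       Ready    control-plane   v1.34.1
	ha-423884-m02   Ready    control-plane   v1.34.1
	ha-423884-m03   Ready    control-plane   v1.34.1
	ha-423884-m04   Ready    <none>          v1.34.1
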
	
	
	==> CRI-O <==
	Nov 09 14:04:15 ha-423884 crio[619]: time="2025-11-09T14:04:15.560693803Z" level=info msg="Started container" PID=1120 containerID=b63a9a2c4e5fbd3fad199cd6e213c4eaeb9cf307dbae0131d130c7d22384f79e description=default/busybox-7b57f96db7-bprtw/busybox id=6e691df6-c3f8-4e79-938c-13c481c463f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87
	Nov 09 14:04:45 ha-423884 conmon[1119]: conmon 5bed382b465f29e125aa <ninfo>: container 1132 exited with status 1
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.632047702Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58fafaad-5a62-4ed2-a48c-ac5cfcffacd0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.633906069Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=36005cb0-6a41-40e9-950b-0b9545dd375d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.64579785Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=95caab63-861a-49ee-8b75-b5d15cfb1b60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.645906225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.658781722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662347217Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/184c9fdfb9f2c0bab041655609ae7f88de235f6f6f171cc5cec8c531dddf11f3/merged/etc/passwd: no such file or directory"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662465462Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/184c9fdfb9f2c0bab041655609ae7f88de235f6f6f171cc5cec8c531dddf11f3/merged/etc/group: no such file or directory"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662915043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.702334944Z" level=info msg="Created container b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c: kube-system/storage-provisioner/storage-provisioner" id=95caab63-861a-49ee-8b75-b5d15cfb1b60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.714514458Z" level=info msg="Starting container: b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c" id=63571f8b-fba8-4137-bf17-f12c81bfa57d name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.721604636Z" level=info msg="Started container" PID=1382 containerID=b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c description=kube-system/storage-provisioner/storage-provisioner id=63571f8b-fba8-4137-bf17-f12c81bfa57d name=/runtime.v1.RuntimeService/StartContainer sandboxID=624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.4215931Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.42716999Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.427323214Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.427398128Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.431810591Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.432264101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.43234498Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.436394288Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.436552493Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.43662753Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.440324498Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.440479609Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	b305e5d843218       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Running             storage-provisioner       2                   624febe3bef0c       storage-provisioner                 kube-system
	4e1565497868e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   1                   156c341c8adee       coredns-66bc5c9577-wl6rt            kube-system
	f0fd891d62df4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   1                   0149d6cd55157       coredns-66bc5c9577-x2j4c            kube-system
	5bed382b465f2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Exited              storage-provisioner       1                   624febe3bef0c       storage-provisioner                 kube-system
	b63a9a2c4e5fb       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago       Running             busybox                   1                   49d4f70bf4320       busybox-7b57f96db7-bprtw            default
	6db8ccf0f7e5d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago       Running             kube-proxy                1                   7482e6b61af8f       kube-proxy-7z7d2                    kube-system
	2858b15648473       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Running             kindnet-cni               1                   ef99cabeed954       kindnet-4s4nj                       kube-system
	d4b5eae8c40aa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago       Running             kube-controller-manager   9                   8d358a601f8e9       kube-controller-manager-ha-423884   kube-system
	7a8b6eec5acc3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   2 minutes ago       Running             kube-apiserver            8                   5dc1bc8f687be       kube-apiserver-ha-423884            kube-system
	78f5efcea671f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Exited              kube-controller-manager   8                   8d358a601f8e9       kube-controller-manager-ha-423884   kube-system
	947390d8997ff       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   3 minutes ago       Running             etcd                      3                   0c595ba9083de       etcd-ha-423884                      kube-system
	c0ba74e816e13       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   3 minutes ago       Exited              kube-apiserver            7                   5dc1bc8f687be       kube-apiserver-ha-423884            kube-system
	374a5429d6a56       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   3 minutes ago       Running             kube-scheduler            2                   3ee3bcbc0fa87       kube-scheduler-ha-423884            kube-system
	785a023345fda       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   3 minutes ago       Running             kube-vip                  1                   90a0cbb7d6ed9       kube-vip-ha-423884                  kube-system
	
	
	==> coredns [4e1565497868eb720e6f89fa2f64f1892d9d7c7fb165c52c75c00a6e26644dcd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56290 - 23869 "HINFO IN 4295743501471833009.7362039906491692351. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027167594s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f0fd891d62df4ba35f7f2bb9f867a20bb1ee66fec8156164361837f74c33b151] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41286 - 39887 "HINFO IN 9165684468172783655.3008217872247164606. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020928117s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-423884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_50_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:50:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:06:33 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:06:33 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:06:33 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:06:33 +0000   Sun, 09 Nov 2025 13:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-423884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                657918f5-0b52-434a-8e2d-4cc93dc46e2f
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-bprtw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-wl6rt             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 coredns-66bc5c9577-x2j4c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-ha-423884                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-4s4nj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-423884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-423884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-7z7d2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-423884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-423884                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 2m32s                  kube-proxy       
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-423884 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   Starting                 3m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m15s (x8 over 3m15s)  kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m15s (x8 over 3m15s)  kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m15s (x8 over 3m15s)  kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m36s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           2m35s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           116s                   node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	
	
	Name:               ha-423884-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_51_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:05:52 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:05:52 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:05:52 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:05:52 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-423884-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                36d1a056-7fa9-4feb-8fa0-03ee70e31c22
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c9qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-423884-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-ftnwt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-423884-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-423884-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-f4hgn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-423884-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-423884-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   RegisteredNode           15m                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-423884-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-423884-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-423884-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             12m                    node-controller  Node ha-423884-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   Starting                 3m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m12s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m11s (x8 over 3m12s)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m11s (x8 over 3m12s)  kubelet          Node ha-423884-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m11s (x8 over 3m12s)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m36s                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           2m35s                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           116s                   node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	
	
	Name:               ha-423884-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_52_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:52:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:06:12 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:06:12 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:06:12 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:06:12 +0000   Sun, 09 Nov 2025 13:52:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-423884-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d57bf8b4-5512-4316-94f7-79a9c657e155
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5bfxx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-423884-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-45jg2                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-423884-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-423884-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-jcgxk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-423884-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-423884-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 97s                    kube-proxy       
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   Starting                 2m36s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node ha-423884-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node ha-423884-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node ha-423884-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m36s                  node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           2m35s                  node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           116s                   node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           50s                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	
	
	Name:               ha-423884-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_53_07_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:53:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-423884-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                750e1d79-71b2-4dc5-bf03-65a8c044964c
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2tcn6       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-9kff9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                   From             Message
	  ----     ------                   ----                  ----             -------
	  Normal   Starting                 109s                  kube-proxy       
	  Normal   Starting                 13m                   kube-proxy       
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)     kubelet          Node ha-423884-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)     kubelet          Node ha-423884-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)     kubelet          Node ha-423884-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                   node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           13m                   node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   CIDRAssignmentFailed     13m                   cidrAllocator    Node ha-423884-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           13m                   node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   NodeReady                13m                   kubelet          Node ha-423884-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                   node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           2m36s                 node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           2m35s                 node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   Starting                 2m11s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m11s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m10s)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m10s)  kubelet          Node ha-423884-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x8 over 2m10s)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           116s                  node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           50s                   node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	
	
	Name:               ha-423884-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T14_06_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:06:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:06:46 +0000   Sun, 09 Nov 2025 14:06:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:06:46 +0000   Sun, 09 Nov 2025 14:06:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:06:46 +0000   Sun, 09 Nov 2025 14:06:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:06:46 +0000   Sun, 09 Nov 2025 14:06:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-423884-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                1fc70477-b16b-405f-8157-408b8fa43a9d
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-423884-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         47s
	  kube-system                 kindnet-44gxs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      45s
	  kube-system                 kube-apiserver-ha-423884-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kube-controller-manager-ha-423884-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-proxy-kvnr4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-ha-423884-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-vip-ha-423884-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        33s   kube-proxy       
	  Normal  RegisteredNode  46s   node-controller  Node ha-423884-m05 event: Registered Node ha-423884-m05 in Controller
	  Normal  RegisteredNode  46s   node-controller  Node ha-423884-m05 event: Registered Node ha-423884-m05 in Controller
	  Normal  RegisteredNode  45s   node-controller  Node ha-423884-m05 event: Registered Node ha-423884-m05 in Controller
	  Normal  RegisteredNode  45s   node-controller  Node ha-423884-m05 event: Registered Node ha-423884-m05 in Controller
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 9 13:36] overlayfs: idmapped layers are currently not supported
	[ +50.497753] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:53] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:55] overlayfs: idmapped layers are currently not supported
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:03] overlayfs: idmapped layers are currently not supported
	[  +3.581786] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:05] overlayfs: idmapped layers are currently not supported
	[ +45.728314] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [947390d8997ffb89bea0e3c1e1bca5c1f8dd53d457d88db5aafd7664dbcb65b2] <==
	{"level":"info","ts":"2025-11-09T14:06:01.965628Z","caller":"traceutil/trace.go:172","msg":"trace[1175708871] transaction","detail":"{read_only:false; response_revision:2483; number_of_response:1; }","duration":"114.598676ms","start":"2025-11-09T14:06:01.851008Z","end":"2025-11-09T14:06:01.965607Z","steps":["trace[1175708871] 'process raft request'  (duration: 93.916098ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.965846Z","caller":"traceutil/trace.go:172","msg":"trace[1120643452] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2483; }","duration":"114.631676ms","start":"2025-11-09T14:06:01.851193Z","end":"2025-11-09T14:06:01.965824Z","steps":["trace[1120643452] 'process raft request'  (duration: 93.785289ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.966010Z","caller":"traceutil/trace.go:172","msg":"trace[607380676] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2483; }","duration":"113.339469ms","start":"2025-11-09T14:06:01.852663Z","end":"2025-11-09T14:06:01.966003Z","steps":["trace[607380676] 'process raft request'  (duration: 92.371981ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.966096Z","caller":"traceutil/trace.go:172","msg":"trace[504956675] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2483; }","duration":"113.3348ms","start":"2025-11-09T14:06:01.852755Z","end":"2025-11-09T14:06:01.966090Z","steps":["trace[504956675] 'process raft request'  (duration: 92.297313ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.966163Z","caller":"traceutil/trace.go:172","msg":"trace[663412455] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2483; }","duration":"113.303193ms","start":"2025-11-09T14:06:01.852854Z","end":"2025-11-09T14:06:01.966157Z","steps":["trace[663412455] 'process raft request'  (duration: 92.214613ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.967022Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-09T14:06:02.169292Z","caller":"traceutil/trace.go:172","msg":"trace[208251802] linearizableReadLoop","detail":"{readStateIndex:3029; appliedIndex:3030; }","duration":"106.922313ms","start":"2025-11-09T14:06:02.062354Z","end":"2025-11-09T14:06:02.169277Z","steps":["trace[208251802] 'read index received'  (duration: 106.916364ms)","trace[208251802] 'applied index is now lower than readState.Index'  (duration: 4.923µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:06:02.169639Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.268925ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-423884-m05\" limit:1 ","response":"range_response_count:1 size:4587"}
	{"level":"info","ts":"2025-11-09T14:06:02.206785Z","caller":"traceutil/trace.go:172","msg":"trace[77792878] range","detail":"{range_begin:/registry/minions/ha-423884-m05; range_end:; response_count:1; response_revision:2492; }","duration":"144.415374ms","start":"2025-11-09T14:06:02.062350Z","end":"2025-11-09T14:06:02.206765Z","steps":["trace[77792878] 'agreement among raft nodes before linearized reading'  (duration: 107.1649ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:02.278681Z","caller":"traceutil/trace.go:172","msg":"trace[1385268954] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2514; }","duration":"109.903656ms","start":"2025-11-09T14:06:02.168759Z","end":"2025-11-09T14:06:02.278663Z","steps":["trace[1385268954] 'process raft request'  (duration: 97.195103ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:02.279603Z","caller":"traceutil/trace.go:172","msg":"trace[902015390] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2514; }","duration":"110.746472ms","start":"2025-11-09T14:06:02.168844Z","end":"2025-11-09T14:06:02.279591Z","steps":["trace[902015390] 'process raft request'  (duration: 97.179907ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:02.290821Z","caller":"traceutil/trace.go:172","msg":"trace[1310650415] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2514; }","duration":"121.880819ms","start":"2025-11-09T14:06:02.168925Z","end":"2025-11-09T14:06:02.290806Z","steps":["trace[1310650415] 'process raft request'  (duration: 97.120205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:06:02.314753Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.782948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:4723"}
	{"level":"info","ts":"2025-11-09T14:06:02.315134Z","caller":"traceutil/trace.go:172","msg":"trace[1058310503] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:2519; }","duration":"110.282187ms","start":"2025-11-09T14:06:02.204838Z","end":"2025-11-09T14:06:02.315121Z","steps":["trace[1058310503] 'agreement among raft nodes before linearized reading'  (duration: 109.588615ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:06:02.645526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.681057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:4723"}
	{"level":"info","ts":"2025-11-09T14:06:02.645650Z","caller":"traceutil/trace.go:172","msg":"trace[979730927] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:2536; }","duration":"175.816033ms","start":"2025-11-09T14:06:02.469821Z","end":"2025-11-09T14:06:02.645637Z","steps":["trace[979730927] 'agreement among raft nodes before linearized reading'  (duration: 175.572994ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:02.683922Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-09T14:06:02.684107Z","caller":"traceutil/trace.go:172","msg":"trace[283009681] transaction","detail":"{read_only:false; response_revision:2547; number_of_response:1; }","duration":"104.662963ms","start":"2025-11-09T14:06:02.579425Z","end":"2025-11-09T14:06:02.684088Z","steps":["trace[283009681] 'process raft request'  (duration: 104.209025ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:05.286292Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"warn","ts":"2025-11-09T14:06:05.433331Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.146558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-wd67q\" limit:1 ","response":"range_response_count:1 size:3694"}
	{"level":"info","ts":"2025-11-09T14:06:05.433485Z","caller":"traceutil/trace.go:172","msg":"trace[1667098546] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-wd67q; range_end:; response_count:1; response_revision:2646; }","duration":"137.31027ms","start":"2025-11-09T14:06:05.296162Z","end":"2025-11-09T14:06:05.433472Z","steps":["trace[1667098546] 'agreement among raft nodes before linearized reading'  (duration: 135.691937ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:05.438948Z","caller":"traceutil/trace.go:172","msg":"trace[861317397] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2648; }","duration":"102.4751ms","start":"2025-11-09T14:06:05.336459Z","end":"2025-11-09T14:06:05.438934Z","steps":["trace[861317397] 'process raft request'  (duration: 102.381512ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:05.445198Z","caller":"traceutil/trace.go:172","msg":"trace[1409993524] transaction","detail":"{read_only:false; response_revision:2648; number_of_response:1; }","duration":"108.804568ms","start":"2025-11-09T14:06:05.336375Z","end":"2025-11-09T14:06:05.445179Z","steps":["trace[1409993524] 'process raft request'  (duration: 102.432088ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:06.978903Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-09T14:06:17.995725Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"33821fa08d210d57","bytes":5253003,"size":"5.3 MB","took":"30.505789898s"}
	
	
	==> kernel <==
	 14:06:50 up 49 min,  0 user,  load average: 3.99, 2.54, 1.60
	Linux ha-423884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2858b156484730345bc39e8edca1ca8eabf5a6c2eb446824527423d351ec9fd3] <==
	I1109 14:06:25.419551       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:06:25.419603       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:06:25.419608       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:06:25.419654       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1109 14:06:25.419659       1 main.go:324] Node ha-423884-m05 has CIDR [10.244.4.0/24] 
	I1109 14:06:35.418634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 14:06:35.418710       1 main.go:301] handling current node
	I1109 14:06:35.418728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:06:35.418737       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:06:35.418884       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:06:35.418890       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:06:35.418939       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:06:35.419816       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:06:35.419996       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1109 14:06:35.420007       1 main.go:324] Node ha-423884-m05 has CIDR [10.244.4.0/24] 
	I1109 14:06:45.423419       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 14:06:45.423457       1 main.go:301] handling current node
	I1109 14:06:45.423473       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:06:45.423482       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:06:45.423721       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:06:45.423740       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:06:45.423834       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:06:45.423846       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:06:45.423983       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1109 14:06:45.423999       1 main.go:324] Node ha-423884-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [7a8b6eec5acc3d0e17aa26ea522ab1781b387d043859460f3c3aa2c80f07c6d7] <==
	I1109 14:04:10.251082       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:04:10.254066       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:04:10.254147       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:04:10.254176       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:04:10.254222       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:04:10.259503       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:04:10.259679       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:04:10.259777       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:04:10.265702       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:04:10.265731       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:04:10.268080       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:04:10.269054       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:04:10.282785       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:04:10.282828       1 policy_source.go:240] refreshing policies
	W1109 14:04:10.283375       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.4]
	I1109 14:04:10.285247       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:04:10.308873       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:04:10.309359       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1109 14:04:10.317898       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1109 14:04:10.610930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1109 14:04:12.050948       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1109 14:04:13.586194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:04:16.069224       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:04:16.362429       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:04:17.009317       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [c0ba74e816e1338d86f2f29c211b83c172784bbf106dba7bae518b2ee0201a4e] <==
	I1109 14:03:36.079801       1 server.go:150] Version: v1.34.1
	I1109 14:03:36.079970       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1109 14:03:37.231523       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:03:37.231632       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:03:37.231673       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:03:37.231710       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1109 14:03:37.231743       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:03:37.231775       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1109 14:03:37.233731       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:03:37.233812       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1109 14:03:37.233841       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:03:37.233872       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1109 14:03:37.233903       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:03:37.233935       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:03:37.264427       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:37.266135       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:03:37.266724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:03:37.284361       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:03:37.285347       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:03:37.285437       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:03:37.285697       1 instance.go:239] Using reconciler: lease
	W1109 14:03:37.287884       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:57.261619       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:57.262651       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1109 14:03:57.287379       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [78f5efcea671f680d59175d4a69693bbbeed9fa6a7cee912ee40e0f169e81738] <==
	I1109 14:03:38.933755       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:03:39.743954       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1109 14:03:39.744053       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:03:39.745947       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1109 14:03:39.746091       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:03:39.746103       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:03:39.746115       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:04:10.143520       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [d4b5eae8c40aaa51b1839a8972d830ffbb9a271e980e83d7f4e1e1a5a0e7c344] <==
	I1109 14:04:15.647826       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:04:15.648760       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:04:15.648829       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:04:15.650811       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:04:15.679894       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:04:15.695896       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:04:15.916336       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:15.916728       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	E1109 14:04:16.184059       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1109 14:04:16.664643       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:16.665695       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	I1109 14:04:56.714750       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:56.714878       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	I1109 14:04:56.849774       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:56.849836       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	E1109 14:04:56.882397       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 14:05:01.377737       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"a423ea2b-b11a-451e-9dc0-0b9bc17e2520\", ResourceVersion:\"2273\", Generation:1, CreationTimestamp:time.Date(2025, time.November, 9, 13, 50, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\
\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40017852e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:
\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea5d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolum
eClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea618), EmptyDir:(*v1.EmptyDirVolumeSource
)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portwor
xVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea678), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), A
zureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20250512-df8de77b\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x400208fe00)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVar
Source)(0x400208fe30)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.Volume
Mount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x40024818c0), Stdin:false, StdinOnce:false,
TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x4002225268), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400180ef30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(n
il), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400354e850)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40022252d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="Unhandle
dError"
	E1109 14:06:00.976377       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-rs764 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-rs764\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 14:06:01.012377       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-rs764 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-rs764\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1109 14:06:01.694293       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423884-m04"
	I1109 14:06:01.694518       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-423884-m05\" does not exist"
	I1109 14:06:01.758023       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-423884-m05" podCIDRs=["10.244.4.0/24"]
	I1109 14:06:05.578297       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423884-m05"
	I1109 14:06:46.839449       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423884-m04"
	
	
	==> kube-proxy [6db8ccf0f7e5d6927f1f90014c3a7aaa5232618397851b52007fa71137db2843] <==
	I1109 14:04:16.669492       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:04:17.085521       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:04:17.200105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:04:17.200215       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 14:04:17.200363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:04:17.278348       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:04:17.278470       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:04:17.286098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:04:17.286454       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:04:17.286654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:04:17.290007       1 config.go:200] "Starting service config controller"
	I1109 14:04:17.290117       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:04:17.290166       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:04:17.290209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:04:17.290245       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:04:17.290290       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:04:17.297376       1 config.go:309] "Starting node config controller"
	I1109 14:04:17.297723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:04:17.297759       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:04:17.390352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:04:17.390429       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:04:17.390722       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [374a5429d6a564b1f172e68e0f603aefc3b04e7b183e31ef8b55c3ae430182ff] <==
	I1109 14:06:02.022097       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vdndm" node="ha-423884-m05"
	E1109 14:06:02.025101       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kvnr4\": pod kube-proxy-kvnr4 is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-kvnr4"
	I1109 14:06:02.051418       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kvnr4" node="ha-423884-m05"
	E1109 14:06:02.035062       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v7zkg\": pod kube-proxy-v7zkg is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-v7zkg"
	I1109 14:06:02.052496       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v7zkg" node="ha-423884-m05"
	E1109 14:06:02.522861       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-m79kh\": pod kube-proxy-m79kh is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-m79kh" node="ha-423884-m05"
	E1109 14:06:02.523002       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b8ec98e6-7a7b-4875-ba3d-54d76bcc48d1(kube-system/kube-proxy-m79kh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-m79kh"
	E1109 14:06:02.523063       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-m79kh\": pod kube-proxy-m79kh is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-m79kh"
	I1109 14:06:02.534280       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-m79kh" node="ha-423884-m05"
	E1109 14:06:02.610261       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hxbhl\": pod kindnet-hxbhl is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-hxbhl" node="ha-423884-m05"
	E1109 14:06:02.610394       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 17de7135-f9e3-491d-bc8a-184957016c66(kube-system/kindnet-hxbhl) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-hxbhl"
	E1109 14:06:02.610452       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hxbhl\": pod kindnet-hxbhl is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kindnet-hxbhl"
	I1109 14:06:02.619516       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hxbhl" node="ha-423884-m05"
	E1109 14:06:05.401006       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wd67q\": pod kindnet-wd67q is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-wd67q" node="ha-423884-m05"
	E1109 14:06:05.401130       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4e5b1687-6beb-4f69-ae4d-b512d9dde310(kube-system/kindnet-wd67q) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wd67q"
	E1109 14:06:05.401497       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wd67q\": pod kindnet-wd67q is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kindnet-wd67q"
	E1109 14:06:05.402258       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-th4qv\": pod kindnet-th4qv is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-th4qv" node="ha-423884-m05"
	E1109 14:06:05.402376       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d3d358d3-d4dd-4c89-bf70-2c8d12502968(kube-system/kindnet-th4qv) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-th4qv"
	E1109 14:06:05.402614       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-th4qv\": pod kindnet-th4qv is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kindnet-th4qv"
	I1109 14:06:05.402696       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wd67q" node="ha-423884-m05"
	I1109 14:06:05.403579       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-th4qv" node="ha-423884-m05"
	E1109 14:06:05.485252       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8b7z6\": pod kindnet-8b7z6 is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-8b7z6" node="ha-423884-m05"
	E1109 14:06:05.485472       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 89c2d73f-27bd-4a17-886a-8d6734fd89d0(kube-system/kindnet-8b7z6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8b7z6"
	E1109 14:06:05.486709       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8b7z6\": pod kindnet-8b7z6 is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kindnet-8b7z6"
	I1109 14:06:05.489197       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8b7z6" node="ha-423884-m05"
	
	
	==> kubelet <==
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.263506     749 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-423884" podUID="8470dcc0-6c4f-4241-ad4e-8b896f6712b0"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.282901     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-423884\" already exists" pod="kube-system/etcd-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.282937     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.324502     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-423884\" already exists" pod="kube-system/kube-apiserver-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.324540     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.353962     749 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.370339     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-423884\" already exists" pod="kube-system/kube-controller-manager-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.385896     749 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.385930     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403495     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c249a88-1e05-40e0-b9d2-60a993f8c146-tmp\") pod \"storage-provisioner\" (UID: \"5c249a88-1e05-40e0-b9d2-60a993f8c146\") " pod="kube-system/storage-provisioner"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403551     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3de4d87-91fe-4303-a8db-50a70cbce4d7-lib-modules\") pod \"kube-proxy-7z7d2\" (UID: \"f3de4d87-91fe-4303-a8db-50a70cbce4d7\") " pod="kube-system/kube-proxy-7z7d2"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403593     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-lib-modules\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403613     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-xtables-lock\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403647     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-cni-cfg\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403685     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3de4d87-91fe-4303-a8db-50a70cbce4d7-xtables-lock\") pod \"kube-proxy-7z7d2\" (UID: \"f3de4d87-91fe-4303-a8db-50a70cbce4d7\") " pod="kube-system/kube-proxy-7z7d2"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.469444     749 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.588284     749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-423884" podStartSLOduration=0.588263843 podStartE2EDuration="588.263843ms" podCreationTimestamp="2025-11-09 14:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:04:14.53432425 +0000 UTC m=+39.410575888" watchObservedRunningTime="2025-11-09 14:04:14.588263843 +0000 UTC m=+39.464515481"
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.716436     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a WatchSource:0}: Error finding container ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a: Status 404 returned error can't find the container with id ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.783698     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb WatchSource:0}: Error finding container 624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb: Status 404 returned error can't find the container with id 624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.798946     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87 WatchSource:0}: Error finding container 49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87: Status 404 returned error can't find the container with id 49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.971628     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13 WatchSource:0}: Error finding container 156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13: Status 404 returned error can't find the container with id 156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13
	Nov 09 14:04:15 ha-423884 kubelet[749]: I1109 14:04:15.348436     749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb3ff8bceed3e182ae34f06d816435e" path="/var/lib/kubelet/pods/fbb3ff8bceed3e182ae34f06d816435e/volumes"
	Nov 09 14:04:35 ha-423884 kubelet[749]: E1109 14:04:35.276791     749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd\": container with ID starting with 12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd not found: ID does not exist" containerID="12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd"
	Nov 09 14:04:35 ha-423884 kubelet[749]: I1109 14:04:35.276883     749 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd" err="rpc error: code = NotFound desc = could not find container \"12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd\": container with ID starting with 12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd not found: ID does not exist"
	Nov 09 14:04:46 ha-423884 kubelet[749]: I1109 14:04:46.630690     749 scope.go:117] "RemoveContainer" containerID="5bed382b465f29e125aa4acb35f3e43d30cb2fa5b8aadd1ad04f56abc10722a7"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884
helpers_test.go:269: (dbg) Run:  kubectl --context ha-423884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (90.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.412055569s)
ha_test.go:305: expected profile "ha-423884" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-423884\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-423884\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfssh
ares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-423884\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"I
P\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong
\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountM
Size\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-423884
helpers_test.go:243: (dbg) docker inspect ha-423884:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	        "Created": "2025-11-09T13:50:17.166169915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 56035,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:03:28.454326897Z",
	            "FinishedAt": "2025-11-09T14:03:27.198748336Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hostname",
	        "HostsPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/hosts",
	        "LogPath": "/var/lib/docker/containers/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8-json.log",
	        "Name": "/ha-423884",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-423884:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-423884",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8",
	                "LowerDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d7d9b7ca13eaf7cc4d5734ee4a6a54ff542d1224261d25b7c41162aa58453c4b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-423884",
	                "Source": "/var/lib/docker/volumes/ha-423884/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-423884",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-423884",
	                "name.minikube.sigs.k8s.io": "ha-423884",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a517d91b9dd2fa9b7c1a86f3c7ce600153c1394576da0eb7ce565af8604f53c",
	            "SandboxKey": "/var/run/docker/netns/1a517d91b9dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32818"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32819"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32822"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32820"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32821"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-423884": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:a0:79:53:a9:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b901b8dcb82129bdc4c62d2bf9cac8a365e41b87cf75b0978b149071ce152f44",
	                    "EndpointID": "863a231ee9ea532fe20e7b03570549e0d16ef617b4f2a4ad156998677dd29113",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-423884",
	                        "8c902201acb6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-423884 -n ha-423884
helpers_test.go:252: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 logs -n 25: (1.800922565s)
helpers_test.go:260: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp testdata/cp-test.txt ha-423884-m04:/home/docker/cp-test.txt                                                            │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m04.txt │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m04_ha-423884.txt                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884.txt                                                │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m02 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ cp      │ ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt              │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ ssh     │ ha-423884 ssh -n ha-423884-m03 sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:53 UTC │
	│ node    │ ha-423884 node start m02 --alsologtostderr -v 5                                                                                     │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:53 UTC │ 09 Nov 25 13:54 UTC │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │ 09 Nov 25 13:54 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5                                                                                  │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 13:54 UTC │                     │
	│ node    │ ha-423884 node list --alsologtostderr -v 5                                                                                          │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:02 UTC │                     │
	│ node    │ ha-423884 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │                     │
	│ stop    │ ha-423884 stop --alsologtostderr -v 5                                                                                               │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │ 09 Nov 25 14:03 UTC │
	│ start   │ ha-423884 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:03 UTC │ 09 Nov 25 14:05 UTC │
	│ node    │ ha-423884 node add --control-plane --alsologtostderr -v 5                                                                           │ ha-423884 │ jenkins │ v1.37.0 │ 09 Nov 25 14:05 UTC │ 09 Nov 25 14:06 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:03:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:03:28.177539   55908 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:03:28.177725   55908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:28.177737   55908 out.go:374] Setting ErrFile to fd 2...
	I1109 14:03:28.177743   55908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:03:28.178015   55908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:03:28.178387   55908 out.go:368] Setting JSON to false
	I1109 14:03:28.179233   55908 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":2759,"bootTime":1762694250,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:03:28.179304   55908 start.go:143] virtualization:  
	I1109 14:03:28.182654   55908 out.go:179] * [ha-423884] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:03:28.186399   55908 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:03:28.186530   55908 notify.go:221] Checking for updates...
	I1109 14:03:28.192400   55908 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:03:28.195380   55908 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:28.198311   55908 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:03:28.201212   55908 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:03:28.204122   55908 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:03:28.207578   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:28.208223   55908 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:03:28.238570   55908 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:03:28.238679   55908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:28.302173   55908 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 14:03:28.29285158 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:28.302284   55908 docker.go:319] overlay module found
	I1109 14:03:28.305382   55908 out.go:179] * Using the docker driver based on existing profile
	I1109 14:03:28.308271   55908 start.go:309] selected driver: docker
	I1109 14:03:28.308292   55908 start.go:930] validating driver "docker" against &{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:28.308437   55908 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:03:28.308547   55908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:03:28.367315   55908 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-09 14:03:28.35650136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:03:28.367739   55908 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:03:28.367770   55908 cni.go:84] Creating CNI manager for ""
	I1109 14:03:28.367814   55908 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 14:03:28.367923   55908 start.go:353] cluster config:
	{Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:28.372921   55908 out.go:179] * Starting "ha-423884" primary control-plane node in "ha-423884" cluster
	I1109 14:03:28.375587   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:03:28.378486   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:03:28.381428   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:28.381482   55908 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:03:28.381492   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:03:28.381532   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:03:28.381584   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:03:28.381603   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:03:28.381760   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:28.401896   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:03:28.401919   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:03:28.401946   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:03:28.401968   55908 start.go:360] acquireMachinesLock for ha-423884: {Name:mkda5c7a1ce8a51da0d8a40a6bd47565509d6909 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:03:28.402035   55908 start.go:364] duration metric: took 47.073µs to acquireMachinesLock for "ha-423884"
	I1109 14:03:28.402054   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:03:28.402059   55908 fix.go:54] fixHost starting: 
	I1109 14:03:28.402320   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:28.419704   55908 fix.go:112] recreateIfNeeded on ha-423884: state=Stopped err=<nil>
	W1109 14:03:28.419733   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:03:28.423107   55908 out.go:252] * Restarting existing docker container for "ha-423884" ...
	I1109 14:03:28.423213   55908 cli_runner.go:164] Run: docker start ha-423884
	I1109 14:03:28.683970   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:28.706610   55908 kic.go:430] container "ha-423884" state is running.
	I1109 14:03:28.707012   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:28.730099   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:28.730346   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:03:28.730410   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:28.752410   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:28.752757   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:28.752774   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:03:28.753518   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:03:31.903504   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 14:03:31.903534   55908 ubuntu.go:182] provisioning hostname "ha-423884"
	I1109 14:03:31.903601   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:31.923571   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:31.923916   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:31.923929   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884 && echo "ha-423884" | sudo tee /etc/hostname
	I1109 14:03:32.084992   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884
	
	I1109 14:03:32.085077   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.103777   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:32.104122   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:32.104149   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:03:32.256008   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:03:32.256036   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:03:32.256065   55908 ubuntu.go:190] setting up certificates
	I1109 14:03:32.256074   55908 provision.go:84] configureAuth start
	I1109 14:03:32.256143   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:32.275304   55908 provision.go:143] copyHostCerts
	I1109 14:03:32.275347   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:32.275379   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:03:32.275389   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:32.275467   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:03:32.275563   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:32.275585   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:03:32.275593   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:32.275622   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:03:32.275677   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:32.275699   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:03:32.275704   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:32.275734   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:03:32.275800   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884 san=[127.0.0.1 192.168.49.2 ha-423884 localhost minikube]
	I1109 14:03:32.661025   55908 provision.go:177] copyRemoteCerts
	I1109 14:03:32.661095   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:03:32.661138   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.678774   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:32.784475   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:03:32.784549   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:03:32.802319   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:03:32.802376   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:03:32.819169   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:03:32.819280   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1109 14:03:32.836450   55908 provision.go:87] duration metric: took 580.362722ms to configureAuth
	I1109 14:03:32.836513   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:03:32.836762   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:32.836868   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:32.853354   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:32.853661   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I1109 14:03:32.853680   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:03:33.144760   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:03:33.144782   55908 machine.go:97] duration metric: took 4.41442095s to provisionDockerMachine
	I1109 14:03:33.144794   55908 start.go:293] postStartSetup for "ha-423884" (driver="docker")
	I1109 14:03:33.144804   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:03:33.144881   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:03:33.144923   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.163262   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.271726   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:03:33.275165   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:03:33.275193   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:03:33.275203   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:03:33.275256   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:03:33.275333   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:03:33.275341   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:03:33.275445   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:03:33.282869   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:33.300086   55908 start.go:296] duration metric: took 155.276378ms for postStartSetup
	I1109 14:03:33.300181   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:33.300227   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.318900   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.421156   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:03:33.426364   55908 fix.go:56] duration metric: took 5.024296824s for fixHost
	I1109 14:03:33.426438   55908 start.go:83] releasing machines lock for "ha-423884", held for 5.024394146s
	I1109 14:03:33.426527   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 14:03:33.444332   55908 ssh_runner.go:195] Run: cat /version.json
	I1109 14:03:33.444382   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.444389   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:03:33.444465   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:33.466109   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.468674   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:33.567827   55908 ssh_runner.go:195] Run: systemctl --version
	I1109 14:03:33.665464   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:03:33.703682   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:03:33.708050   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:03:33.708118   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:03:33.716273   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:03:33.716295   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:03:33.716329   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:03:33.716378   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:03:33.732433   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:03:33.746199   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:03:33.746294   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:03:33.762279   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:03:33.775981   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:03:33.917723   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:03:34.035293   55908 docker.go:234] disabling docker service ...
	I1109 14:03:34.035371   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:03:34.050665   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:03:34.063795   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:03:34.194207   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:03:34.316201   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:03:34.328760   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:03:34.342596   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:03:34.342661   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.351380   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:03:34.351501   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.360283   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.369198   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.378151   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:03:34.386268   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.394888   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.403377   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:34.412509   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:03:34.419807   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:03:34.427015   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:34.533676   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
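
The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) and then restarts CRI-O. A minimal Go sketch of the two key substitutions, assuming a local copy of the file rather than the live node config (an illustration of the sed edits in the log, not minikube's implementation):

    // crioconf.go: illustrative only; mirrors the two sed substitutions above.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	path := "02-crio.conf" // assumption: a local copy, not the live node file
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Replace whole lines, exactly like `sed 's|^.*pause_image = .*$|...|'`.
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }

After edits like these, the log restarts CRI-O and waits for its socket, as the following lines show.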
	I1109 14:03:34.661746   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:03:34.661816   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:03:34.665477   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:03:34.665590   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:03:34.668882   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:03:34.697803   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:03:34.697964   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:34.726272   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:34.758410   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:03:34.761247   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:03:34.776734   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:03:34.780588   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
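
The two commands above pin host.minikube.internal in the node's /etc/hosts by dropping any stale entry and appending a fresh one. A minimal Go sketch of that drop-then-append logic, assuming a local hosts.sample file and a helper named pinHost (both illustrative, not minikube's code):

    // pinhosts.go: illustrative sketch of the grep -v / append one-liner above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost removes lines whose last field equals hostname and appends
    // "ip<TAB>hostname", mirroring the shell pipeline in the log.
    func pinHost(contents, ip, hostname string) string {
    	var b strings.Builder
    	for _, line := range strings.Split(contents, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) > 0 && fields[len(fields)-1] == hostname {
    			continue // drop the stale entry, like `grep -v` in the log
    		}
    		if line != "" {
    			b.WriteString(line + "\n")
    		}
    	}
    	fmt.Fprintf(&b, "%s\t%s\n", ip, hostname)
    	return b.String()
    }

    func main() {
    	data, err := os.ReadFile("hosts.sample") // assumption for the example
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Print(pinHost(string(data), "192.168.49.1", "host.minikube.internal"))
    }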
	I1109 14:03:34.790316   55908 kubeadm.go:884] updating cluster {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:03:34.790470   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:34.790530   55908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:03:34.825584   55908 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:03:34.825621   55908 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:03:34.825685   55908 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:03:34.851854   55908 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:03:34.851980   55908 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:03:34.851997   55908 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1109 14:03:34.852146   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:03:34.852273   55908 ssh_runner.go:195] Run: crio config
	I1109 14:03:34.903939   55908 cni.go:84] Creating CNI manager for ""
	I1109 14:03:34.903963   55908 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1109 14:03:34.903981   55908 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:03:34.904009   55908 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-423884 NodeName:ha-423884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:03:34.904140   55908 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-423884"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
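
The kubeadm/kubelet/kube-proxy configuration above is generated from the cluster profile, with the node name and node-ip substituted per node. A sketch of how such substitution could be expressed with Go's text/template, using a simplified stanza and field names (Name, IP) that are illustrative rather than minikube's actual template:

    // nodereg.go: illustrative rendering of the nodeRegistration stanza above.
    package main

    import (
    	"os"
    	"text/template"
    )

    const stanza = `nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.Name}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.IP}}"
      taints: []
    `

    func main() {
    	t := template.Must(template.New("nodeRegistration").Parse(stanza))
    	// Values taken from the log above for this primary control-plane node.
    	_ = t.Execute(os.Stdout, struct{ Name, IP string }{"ha-423884", "192.168.49.2"})
    }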
	
	I1109 14:03:34.904162   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:03:34.904219   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:03:34.915786   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:34.915909   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
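
Because `sudo sh -c "lsmod | grep ip_vs"` exited non-zero, the log gives up on ipvs-based control-plane load balancing and writes the ARP-mode kube-vip static-pod manifest shown above. A minimal Go sketch of an equivalent module probe that reads /proc/modules directly (an illustration, not how minikube performs the check):

    // ipvsprobe.go: illustrative check for loaded ip_vs kernel modules (Linux only).
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func hasModule(name string) (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := sc.Text()
    		// Match "ip_vs" itself as well as submodules like "ip_vs_rr".
    		if strings.HasPrefix(line, name+" ") || strings.HasPrefix(line, name+"_") {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := hasModule("ip_vs")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("ip_vs loaded:", ok) // false here -> ARP/static-pod fallback
    }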
	I1109 14:03:34.915977   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:03:34.923406   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:03:34.923480   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1109 14:03:34.931134   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1109 14:03:34.943678   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:03:34.956560   55908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1109 14:03:34.969028   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:03:34.981532   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:03:34.985043   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:34.994528   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:35.107177   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:35.123121   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.2
	I1109 14:03:35.123194   55908 certs.go:195] generating shared ca certs ...
	I1109 14:03:35.123226   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:35.123409   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:03:35.123481   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:03:35.123518   55908 certs.go:257] generating profile certs ...
	I1109 14:03:35.123657   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:03:35.123781   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.32540612
	I1109 14:03:35.123858   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:03:35.123923   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:03:35.123960   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:03:35.124009   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:03:35.124043   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:03:35.124090   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:03:35.124123   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:03:35.124169   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:03:35.124203   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:03:35.124294   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:03:35.124369   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:03:35.124408   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:03:35.124455   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:03:35.124508   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:03:35.124566   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:03:35.124648   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:35.124724   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.124808   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.124844   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.125710   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:03:35.143578   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:03:35.160309   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:03:35.180028   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:03:35.198803   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:03:35.222988   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:03:35.246464   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:03:35.273513   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:03:35.298574   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:03:35.323310   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:03:35.344665   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:03:35.365172   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:03:35.378569   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:03:35.385015   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:03:35.394601   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.398299   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.398412   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:03:35.453607   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:03:35.463012   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:03:35.471886   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.475852   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.475960   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:03:35.519535   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:03:35.532870   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:03:35.541526   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.545559   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.545647   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:35.587429   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:03:35.595355   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:03:35.598863   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:03:35.639394   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:03:35.682546   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:03:35.723686   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:03:35.769486   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:03:35.818163   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
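For context on the block of checks above: `openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is presumably how minikube decides whether the existing control-plane certs can be reused as-is before StartCluster. A minimal Go sketch of the same 24-hour expiry check (the path below is just one of the certs probed above; this is an illustration, not minikube's own code):

    // certExpiringSoon reports whether the PEM-encoded certificate at path
    // expires within the given window (the log's `-checkend 86400` == 24h).
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func certExpiringSoon(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	// True if "now + window" is past NotAfter, i.e. the cert expires inside the window.
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := certExpiringSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }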
	I1109 14:03:35.873301   55908 kubeadm.go:401] StartCluster: {Name:ha-423884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:03:35.873423   55908 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:03:35.873481   55908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:03:35.949725   55908 cri.go:89] found id: "947390d8997ffb89bea0e3c1e1bca5c1f8dd53d457d88db5aafd7664dbcb65b2"
	I1109 14:03:35.949794   55908 cri.go:89] found id: "c0ba74e816e1338d86f2f29c211b83c172784bbf106dba7bae518b2ee0201a4e"
	I1109 14:03:35.949821   55908 cri.go:89] found id: "785a023345fda66c98e73a27cd2aa79f3beb28f1d9847ff2264dd21ee91db42a"
	I1109 14:03:35.949838   55908 cri.go:89] found id: ""
	I1109 14:03:35.949915   55908 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:03:35.976461   55908 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:03:35Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:03:35.976622   55908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:03:35.995533   55908 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:03:35.995601   55908 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:03:35.995698   55908 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:03:36.007080   55908 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:36.007609   55908 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-423884" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:36.007785   55908 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "ha-423884" cluster setting kubeconfig missing "ha-423884" context setting]
	I1109 14:03:36.008206   55908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.008996   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, Us
erAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:03:36.009887   55908 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1109 14:03:36.009995   55908 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1109 14:03:36.010046   55908 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1109 14:03:36.010070   55908 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1109 14:03:36.009972   55908 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1109 14:03:36.010189   55908 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1109 14:03:36.010607   55908 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:03:36.028288   55908 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1109 14:03:36.028364   55908 kubeadm.go:602] duration metric: took 32.744336ms to restartPrimaryControlPlane
	I1109 14:03:36.028386   55908 kubeadm.go:403] duration metric: took 155.094636ms to StartCluster
	I1109 14:03:36.028414   55908 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.028527   55908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:03:36.029250   55908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:36.029535   55908 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:03:36.029589   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:03:36.029633   55908 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:03:36.030494   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:36.035208   55908 out.go:179] * Enabled addons: 
	I1109 14:03:36.040262   55908 addons.go:515] duration metric: took 10.631239ms for enable addons: enabled=[]
	I1109 14:03:36.040364   55908 start.go:247] waiting for cluster config update ...
	I1109 14:03:36.040385   55908 start.go:256] writing updated cluster config ...
	I1109 14:03:36.043855   55908 out.go:203] 
	I1109 14:03:36.047167   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:36.047362   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.050885   55908 out.go:179] * Starting "ha-423884-m02" control-plane node in "ha-423884" cluster
	I1109 14:03:36.053842   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:03:36.056999   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:03:36.060038   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:03:36.060318   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:03:36.060344   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:03:36.060467   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:03:36.060496   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:03:36.060681   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.087960   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:03:36.087980   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:03:36.087991   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:03:36.088015   55908 start.go:360] acquireMachinesLock for ha-423884-m02: {Name:mkc465d60ac134a0502b48f535d5c2db44f7f07b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:03:36.088071   55908 start.go:364] duration metric: took 40.263µs to acquireMachinesLock for "ha-423884-m02"
	I1109 14:03:36.088090   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:03:36.088095   55908 fix.go:54] fixHost starting: m02
	I1109 14:03:36.088348   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:36.119614   55908 fix.go:112] recreateIfNeeded on ha-423884-m02: state=Stopped err=<nil>
	W1109 14:03:36.119639   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:03:36.123884   55908 out.go:252] * Restarting existing docker container for "ha-423884-m02" ...
	I1109 14:03:36.123973   55908 cli_runner.go:164] Run: docker start ha-423884-m02
	I1109 14:03:36.530699   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 14:03:36.559612   55908 kic.go:430] container "ha-423884-m02" state is running.
	I1109 14:03:36.560004   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:36.586384   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:03:36.586624   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:03:36.586695   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:36.615730   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:36.616048   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:36.616058   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:03:36.616804   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49240->127.0.0.1:32823: read: connection reset by peer
	I1109 14:03:39.844217   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 14:03:39.844255   55908 ubuntu.go:182] provisioning hostname "ha-423884-m02"
	I1109 14:03:39.844325   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:39.868660   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:39.868984   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:39.869001   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m02 && echo "ha-423884-m02" | sudo tee /etc/hostname
	I1109 14:03:40.093355   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m02
	
	I1109 14:03:40.093437   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.121586   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:40.121898   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:40.121920   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:03:40.328493   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:03:40.328522   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:03:40.328538   55908 ubuntu.go:190] setting up certificates
	I1109 14:03:40.328548   55908 provision.go:84] configureAuth start
	I1109 14:03:40.328618   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:40.372055   55908 provision.go:143] copyHostCerts
	I1109 14:03:40.372096   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:40.372169   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:03:40.372176   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:03:40.372257   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:03:40.372331   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:40.372347   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:03:40.372352   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:03:40.372377   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:03:40.372418   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:40.372433   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:03:40.372437   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:03:40.372461   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:03:40.372508   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m02 san=[127.0.0.1 192.168.49.3 ha-423884-m02 localhost minikube]
	I1109 14:03:40.460419   55908 provision.go:177] copyRemoteCerts
	I1109 14:03:40.460536   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:03:40.460611   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.505492   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:40.630054   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:03:40.630110   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:03:40.653044   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:03:40.653106   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:03:40.683285   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:03:40.683343   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:03:40.713212   55908 provision.go:87] duration metric: took 384.650953ms to configureAuth
	I1109 14:03:40.713278   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:03:40.713537   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:40.713674   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:40.745458   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:03:40.745765   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I1109 14:03:40.745786   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:03:41.160286   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:03:41.160309   55908 machine.go:97] duration metric: took 4.573667407s to provisionDockerMachine
	I1109 14:03:41.160321   55908 start.go:293] postStartSetup for "ha-423884-m02" (driver="docker")
	I1109 14:03:41.160332   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:03:41.160396   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:03:41.160449   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.178991   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.284963   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:03:41.288725   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:03:41.288763   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:03:41.288776   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:03:41.288833   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:03:41.288922   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:03:41.288929   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:03:41.289033   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:03:41.297714   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:41.316091   55908 start.go:296] duration metric: took 155.749725ms for postStartSetup
	I1109 14:03:41.316183   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:03:41.316251   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.332754   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.441566   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:03:41.446853   55908 fix.go:56] duration metric: took 5.358725913s for fixHost
	I1109 14:03:41.446878   55908 start.go:83] releasing machines lock for "ha-423884-m02", held for 5.358799177s
	I1109 14:03:41.446969   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m02
	I1109 14:03:41.471189   55908 out.go:179] * Found network options:
	I1109 14:03:41.474105   55908 out.go:179]   - NO_PROXY=192.168.49.2
	W1109 14:03:41.477016   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:03:41.477060   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:03:41.477139   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:03:41.477182   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.477214   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:03:41.477268   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m02
	I1109 14:03:41.498901   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.500358   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m02/id_rsa Username:docker}
	I1109 14:03:41.696694   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:03:41.701371   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:03:41.701516   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:03:41.709683   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:03:41.709721   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:03:41.709755   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:03:41.709825   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:03:41.725678   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:03:41.739787   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:03:41.739856   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:03:41.757143   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:03:41.771643   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:03:41.900022   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:03:42.105606   55908 docker.go:234] disabling docker service ...
	I1109 14:03:42.105681   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:03:42.144421   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:03:42.178839   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:03:42.468213   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:03:42.691726   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:03:42.709612   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:03:42.730882   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:03:42.730946   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.740089   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:03:42.740148   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.750087   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.759038   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.773257   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:03:42.782648   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.800890   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.812622   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:03:42.829326   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:03:42.846516   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:03:42.860429   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:43.078130   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
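Taken together, the sed edits above should leave the CRI-O drop-in with values roughly like the following. This is an illustrative reconstruction from the logged commands only; the exact surrounding keys in /etc/crio/crio.conf.d/02-crio.conf depend on the base image, and the section placement shown here follows upstream CRI-O defaults:

    # illustrative; derived from the sed commands in the log above
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]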
	I1109 14:03:43.300172   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:03:43.300292   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:03:43.304336   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:03:43.304441   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:03:43.308290   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:03:43.334041   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:03:43.334158   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:43.366433   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:03:43.403997   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:03:43.406881   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:03:43.409947   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:03:43.426148   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:03:43.430019   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:43.439859   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:03:43.440179   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:43.440497   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:03:43.458429   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:03:43.458717   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.3
	I1109 14:03:43.458732   55908 certs.go:195] generating shared ca certs ...
	I1109 14:03:43.458747   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:03:43.458858   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:03:43.458906   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:03:43.458917   55908 certs.go:257] generating profile certs ...
	I1109 14:03:43.458991   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:03:43.459044   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.75d82079
	I1109 14:03:43.459087   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:03:43.459098   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:03:43.459110   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:03:43.459125   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:03:43.459143   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:03:43.459162   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:03:43.459178   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:03:43.459192   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:03:43.459209   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:03:43.459262   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:03:43.459293   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:03:43.459305   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:03:43.459331   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:03:43.459355   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:03:43.459385   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:03:43.459432   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:03:43.459462   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.459482   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:03:43.459498   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:03:43.459553   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:03:43.476791   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:03:43.576150   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 14:03:43.579947   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 14:03:43.588442   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 14:03:43.591845   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 14:03:43.600302   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 14:03:43.603828   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 14:03:43.612657   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 14:03:43.616127   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 14:03:43.624209   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 14:03:43.627692   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 14:03:43.635688   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 14:03:43.639181   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 14:03:43.647210   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:03:43.665935   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:03:43.683098   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:03:43.701792   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:03:43.720535   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:03:43.738207   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:03:43.756027   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:03:43.774278   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:03:43.792937   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:03:43.811113   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:03:43.829133   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:03:43.847536   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 14:03:43.860908   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 14:03:43.873289   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 14:03:43.886865   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 14:03:43.900616   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 14:03:43.913948   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 14:03:43.927015   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 14:03:43.939523   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:03:43.945583   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:03:43.954590   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.958760   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.958867   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:03:43.999953   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:03:44.007895   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:03:44.020206   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.024532   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.024619   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:03:44.068208   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:03:44.079840   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:03:44.089486   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.094109   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.094227   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:03:44.137949   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:03:44.146324   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:03:44.150369   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:03:44.191825   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:03:44.232925   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:03:44.273939   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:03:44.314652   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:03:44.356028   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:03:44.407731   55908 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1109 14:03:44.407917   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:03:44.407958   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:03:44.408031   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:03:44.419991   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:03:44.420052   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
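The manifest above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (the 1358-byte scp a few lines down), where kubelet picks it up as a static pod. A small sketch of how one might sanity-check a generated manifest like this by decoding it into a typed object (assumes the k8s.io/api and sigs.k8s.io/yaml modules as dependencies; not part of minikube's own code):

    package main

    import (
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	"sigs.k8s.io/yaml"
    )

    func main() {
    	// "kube-vip.yaml" is a placeholder path for the generated manifest.
    	data, err := os.ReadFile("kube-vip.yaml")
    	if err != nil {
    		panic(err)
    	}
    	var pod corev1.Pod
    	// yaml.Unmarshal converts the YAML to JSON and decodes it into the typed Pod,
    	// so malformed fields surface as errors before kubelet ever sees the file.
    	if err := yaml.Unmarshal(data, &pod); err != nil {
    		panic(err)
    	}
    	fmt.Printf("static pod %s/%s with %d container(s), hostNetwork=%v\n",
    		pod.Namespace, pod.Name, len(pod.Spec.Containers), pod.Spec.HostNetwork)
    }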
	I1109 14:03:44.420129   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:03:44.427945   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:03:44.428013   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 14:03:44.435476   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:03:44.448591   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:03:44.461928   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:03:44.475231   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:03:44.478933   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:03:44.488867   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:44.623612   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:44.638897   55908 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:03:44.639336   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:03:44.643324   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:03:44.646391   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:03:44.766731   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:03:44.781836   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:03:44.781971   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:03:44.782234   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m02" to be "Ready" ...
	W1109 14:03:54.783441   55908 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout
	I1109 14:03:58.293061   55908 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02"
	W1109 14:04:08.294056   55908 node_ready.go:55] error getting node "ha-423884-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-423884-m02": net/http: TLS handshake timeout - error from a previous attempt: read tcp 192.168.49.1:36070->192.168.49.2:8443: read: connection reset by peer
	I1109 14:04:10.224067   55908 node_ready.go:49] node "ha-423884-m02" is "Ready"
	I1109 14:04:10.224094   55908 node_ready.go:38] duration metric: took 25.441822993s for node "ha-423884-m02" to be "Ready" ...
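The Ready wait above tolerates transient failures such as the TLS handshake timeouts logged at 14:03:54 and 14:04:08 and simply keeps polling until the node reports Ready. A rough client-go equivalent of that loop (the kubeconfig path, node name handling, and fixed backoff are placeholder assumptions for illustration, not minikube's actual implementation):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path; the log builds its client from the profile certs instead.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-423884-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		// Transient errors (TLS handshake timeout, connection reset) are ignored and retried.
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for node Ready")
    }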
	I1109 14:04:10.224107   55908 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:04:10.224169   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:10.237071   55908 api_server.go:72] duration metric: took 25.598086143s to wait for apiserver process to appear ...
	I1109 14:04:10.237093   55908 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:04:10.237122   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:10.273674   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:10.273706   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:10.737933   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:10.747401   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:10.747476   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:11.238081   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:11.253573   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:11.253663   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:11.737248   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:11.745671   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:04:11.745753   55908 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:04:12.237288   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:12.246058   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1109 14:04:12.247325   55908 api_server.go:141] control plane version: v1.34.1
	I1109 14:04:12.247378   55908 api_server.go:131] duration metric: took 2.0102771s to wait for apiserver health ...
	I1109 14:04:12.247399   55908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:04:12.255293   55908 system_pods.go:59] 26 kube-system pods found
	I1109 14:04:12.255379   55908 system_pods.go:61] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running
	I1109 14:04:12.255399   55908 system_pods.go:61] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running
	I1109 14:04:12.255418   55908 system_pods.go:61] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:12.255451   55908 system_pods.go:61] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:12.255475   55908 system_pods.go:61] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:12.255490   55908 system_pods.go:61] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:12.255507   55908 system_pods.go:61] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:12.255525   55908 system_pods.go:61] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:12.255556   55908 system_pods.go:61] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:12.255578   55908 system_pods.go:61] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:12.255596   55908 system_pods.go:61] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:12.255613   55908 system_pods.go:61] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:12.255631   55908 system_pods.go:61] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:12.255657   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:12.255679   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:12.255698   55908 system_pods.go:61] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:12.255716   55908 system_pods.go:61] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:12.255733   55908 system_pods.go:61] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:12.255760   55908 system_pods.go:61] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:12.255785   55908 system_pods.go:61] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:12.255802   55908 system_pods.go:61] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:12.255819   55908 system_pods.go:61] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:12.255834   55908 system_pods.go:61] "kube-vip-ha-423884" [8470dcc0-6c4f-4241-ad4e-8b896f6712b0] Running
	I1109 14:04:12.255904   55908 system_pods.go:61] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:12.255931   55908 system_pods.go:61] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:12.255949   55908 system_pods.go:61] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:12.255967   55908 system_pods.go:74] duration metric: took 8.549678ms to wait for pod list to return data ...
	I1109 14:04:12.255987   55908 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:04:12.259644   55908 default_sa.go:45] found service account: "default"
	I1109 14:04:12.259701   55908 default_sa.go:55] duration metric: took 3.685783ms for default service account to be created ...
	I1109 14:04:12.259723   55908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:04:12.265757   55908 system_pods.go:86] 26 kube-system pods found
	I1109 14:04:12.265830   55908 system_pods.go:89] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running
	I1109 14:04:12.265849   55908 system_pods.go:89] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running
	I1109 14:04:12.265871   55908 system_pods.go:89] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:12.265906   55908 system_pods.go:89] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:12.265928   55908 system_pods.go:89] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:12.265945   55908 system_pods.go:89] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:12.265961   55908 system_pods.go:89] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:12.265977   55908 system_pods.go:89] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:12.266004   55908 system_pods.go:89] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:12.266025   55908 system_pods.go:89] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:12.266042   55908 system_pods.go:89] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:12.266059   55908 system_pods.go:89] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:12.266077   55908 system_pods.go:89] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:12.266107   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:12.266238   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:12.266258   55908 system_pods.go:89] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:12.266274   55908 system_pods.go:89] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:12.266290   55908 system_pods.go:89] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:12.266322   55908 system_pods.go:89] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:12.266345   55908 system_pods.go:89] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:12.266364   55908 system_pods.go:89] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:12.266382   55908 system_pods.go:89] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:12.266400   55908 system_pods.go:89] "kube-vip-ha-423884" [8470dcc0-6c4f-4241-ad4e-8b896f6712b0] Running
	I1109 14:04:12.266427   55908 system_pods.go:89] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:12.266450   55908 system_pods.go:89] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:12.266468   55908 system_pods.go:89] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:12.266489   55908 system_pods.go:126] duration metric: took 6.747337ms to wait for k8s-apps to be running ...
	I1109 14:04:12.266510   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:12.266588   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:12.282135   55908 system_svc.go:56] duration metric: took 15.616371ms WaitForService to wait for kubelet
	I1109 14:04:12.282232   55908 kubeadm.go:587] duration metric: took 27.643251935s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:12.282264   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:12.287797   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.287962   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.287995   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288016   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288036   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288054   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288080   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:12.288104   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:12.288124   55908 node_conditions.go:105] duration metric: took 5.843459ms to run NodePressure ...
	I1109 14:04:12.288147   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:12.288194   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:12.292016   55908 out.go:203] 
	I1109 14:04:12.295240   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:12.295416   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.298693   55908 out.go:179] * Starting "ha-423884-m03" control-plane node in "ha-423884" cluster
	I1109 14:04:12.302221   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:04:12.305225   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:04:12.307950   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:04:12.307975   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:04:12.308093   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:04:12.308103   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:04:12.308245   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.308454   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:04:12.335753   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:04:12.335772   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:04:12.335783   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:04:12.335806   55908 start.go:360] acquireMachinesLock for ha-423884-m03: {Name:mk2c1f49120f6acdbb0b7c106d84b578b982c1c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:04:12.335852   55908 start.go:364] duration metric: took 32.608µs to acquireMachinesLock for "ha-423884-m03"
	I1109 14:04:12.335906   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:04:12.335913   55908 fix.go:54] fixHost starting: m03
	I1109 14:04:12.336176   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:04:12.360018   55908 fix.go:112] recreateIfNeeded on ha-423884-m03: state=Stopped err=<nil>
	W1109 14:04:12.360050   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:04:12.363431   55908 out.go:252] * Restarting existing docker container for "ha-423884-m03" ...
	I1109 14:04:12.363592   55908 cli_runner.go:164] Run: docker start ha-423884-m03
	I1109 14:04:12.653356   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 14:04:12.683958   55908 kic.go:430] container "ha-423884-m03" state is running.
	I1109 14:04:12.684306   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:12.727840   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:12.728107   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:04:12.728163   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:12.759896   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:12.760195   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:12.760204   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:04:12.761068   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:04:16.033281   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m03
	
	I1109 14:04:16.033354   55908 ubuntu.go:182] provisioning hostname "ha-423884-m03"
	I1109 14:04:16.033448   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:16.074078   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:16.074389   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:16.074407   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m03 && echo "ha-423884-m03" | sudo tee /etc/hostname
	I1109 14:04:16.423110   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m03
	
	I1109 14:04:16.423192   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:16.456144   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:16.456500   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:16.456523   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:04:16.751298   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:04:16.751374   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:04:16.751397   55908 ubuntu.go:190] setting up certificates
	I1109 14:04:16.751407   55908 provision.go:84] configureAuth start
	I1109 14:04:16.751471   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:16.793487   55908 provision.go:143] copyHostCerts
	I1109 14:04:16.793536   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:16.793570   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:04:16.793586   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:16.793664   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:04:16.793744   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:16.793767   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:04:16.793774   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:16.793803   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:04:16.793848   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:16.793870   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:04:16.793874   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:16.793899   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:04:16.793952   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m03 san=[127.0.0.1 192.168.49.4 ha-423884-m03 localhost minikube]
	I1109 14:04:17.244605   55908 provision.go:177] copyRemoteCerts
	I1109 14:04:17.244683   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:04:17.244730   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:17.267714   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:17.397341   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:04:17.397397   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:04:17.451209   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:04:17.451268   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:04:17.501897   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:04:17.501959   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:04:17.543399   55908 provision.go:87] duration metric: took 791.974444ms to configureAuth
	I1109 14:04:17.543429   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:04:17.543658   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:17.543760   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:17.578118   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:17.578425   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I1109 14:04:17.578447   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:04:18.006743   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:04:18.006766   55908 machine.go:97] duration metric: took 5.278648591s to provisionDockerMachine
	I1109 14:04:18.006777   55908 start.go:293] postStartSetup for "ha-423884-m03" (driver="docker")
	I1109 14:04:18.006788   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:04:18.006849   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:04:18.006908   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.028378   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.136392   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:04:18.139676   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:04:18.139706   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:04:18.139718   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:04:18.139772   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:04:18.139877   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:04:18.139916   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:04:18.140203   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:04:18.151607   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:18.170641   55908 start.go:296] duration metric: took 163.846632ms for postStartSetup
	I1109 14:04:18.170734   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:04:18.170783   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.190645   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.303725   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:04:18.315157   55908 fix.go:56] duration metric: took 5.979236955s for fixHost
	I1109 14:04:18.315228   55908 start.go:83] releasing machines lock for "ha-423884-m03", held for 5.979367853s
	I1109 14:04:18.315337   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 14:04:18.346232   55908 out.go:179] * Found network options:
	I1109 14:04:18.349488   55908 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1109 14:04:18.352634   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352664   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352686   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:18.352696   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:04:18.352763   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:04:18.352815   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.353042   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:04:18.353099   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 14:04:18.407037   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.416133   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 14:04:18.761655   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:04:18.827322   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:04:18.827443   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:04:18.846068   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:04:18.846140   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:04:18.846187   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:04:18.846266   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:04:18.869418   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:04:18.889860   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:04:18.889997   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:04:18.919381   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:04:18.942214   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:04:19.209339   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:04:19.469248   55908 docker.go:234] disabling docker service ...
	I1109 14:04:19.469315   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:04:19.487357   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:04:19.508816   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:04:19.750896   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:04:19.978351   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:04:20.002094   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:04:20.029962   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:04:20.030038   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.046014   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:04:20.046086   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.061773   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.083454   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.096347   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:04:20.114097   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.126722   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.143159   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:20.160109   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:04:20.177582   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:04:20.196091   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:20.468433   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:04:21.283004   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:04:21.283084   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:04:21.287304   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:04:21.287372   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:04:21.291538   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:04:21.328386   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:04:21.328481   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:21.361417   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:21.451954   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:04:21.455954   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:04:21.459224   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1109 14:04:21.462952   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:04:21.484807   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:04:21.489960   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:21.506775   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:04:21.507015   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:21.507301   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:04:21.526101   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:04:21.526377   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.4
	I1109 14:04:21.526391   55908 certs.go:195] generating shared ca certs ...
	I1109 14:04:21.526407   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:04:21.526515   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:04:21.526559   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:04:21.526572   55908 certs.go:257] generating profile certs ...
	I1109 14:04:21.526658   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key
	I1109 14:04:21.526726   55908 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key.7ffb4171
	I1109 14:04:21.526767   55908 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key
	I1109 14:04:21.526781   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:04:21.526793   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:04:21.526808   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:04:21.526826   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:04:21.526836   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 14:04:21.526848   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 14:04:21.526910   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 14:04:21.526925   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 14:04:21.526982   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:04:21.527018   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:04:21.527028   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:04:21.527056   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:04:21.527080   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:04:21.527107   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:04:21.527154   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:21.527185   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:04:21.527200   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:21.527211   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:04:21.527271   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 14:04:21.551818   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 14:04:21.676202   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1109 14:04:21.680212   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1109 14:04:21.691215   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1109 14:04:21.701694   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1109 14:04:21.714762   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1109 14:04:21.719210   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1109 14:04:21.729229   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1109 14:04:21.733219   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1109 14:04:21.742594   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1109 14:04:21.746326   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1109 14:04:21.755768   55908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1109 14:04:21.759436   55908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1109 14:04:21.771660   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:04:21.795312   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:04:21.815560   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:04:21.833662   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:04:21.852805   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:04:21.870267   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:04:21.889041   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:04:21.907386   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:04:21.925376   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:04:21.943214   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:04:21.961586   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:04:21.979793   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1109 14:04:21.993395   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1109 14:04:22.006684   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1109 14:04:22.033388   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1109 14:04:22.052052   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1109 14:04:22.068060   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1109 14:04:22.086207   55908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1109 14:04:22.104940   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:04:22.112046   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:04:22.122102   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.125980   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.126092   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:04:22.167702   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:04:22.176107   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:04:22.184759   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.189529   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.189649   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:22.231896   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:04:22.240788   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:04:22.250648   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.254774   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.254890   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:04:22.295743   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:04:22.303694   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:04:22.308400   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:04:22.361240   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:04:22.402093   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:04:22.444367   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:04:22.486212   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:04:22.528227   55908 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
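
The five `-checkend 86400` runs above ask openssl whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit would force the certificate to be regenerated before reuse. A minimal Go sketch of the same check (illustrative only, not minikube code; the path is one of the certificates listed above):

// certcheck.go - illustrative sketch of the `openssl x509 -checkend 86400`
// test: parse a PEM certificate and fail if it expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `-checkend 86400`: fail if NotAfter is inside the next 24h.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h - would need regeneration")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
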
	I1109 14:04:22.571111   55908 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1109 14:04:22.571227   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:04:22.571257   55908 kube-vip.go:115] generating kube-vip config ...
	I1109 14:04:22.571311   55908 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1109 14:04:22.583651   55908 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:04:22.583707   55908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
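
The manifest above is generated in ARP mode because the earlier `lsmod | grep ip_vs` probe exited non-zero, so kube-vip's IPVS-based control-plane load-balancing is skipped and only the 192.168.49.254 VIP is announced. A small illustrative Go sketch (not minikube code) of the same probe, reading /proc/modules the way lsmod does:

// ipvscheck.go - illustrative sketch of the `lsmod | grep ip_vs` probe:
// lsmod reads /proc/modules, so scan that file for an ip_vs entry.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 0 {
			continue
		}
		if fields[0] == "ip_vs" || strings.HasPrefix(fields[0], "ip_vs_") {
			fmt.Println("ip_vs loaded: IPVS control-plane load-balancing is possible")
			return
		}
	}
	fmt.Println("ip_vs not loaded: kube-vip can only announce the ARP-based VIP")
}
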
	I1109 14:04:22.583783   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:04:22.592357   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:04:22.592434   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1109 14:04:22.602564   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:04:22.615684   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:04:22.634261   55908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1109 14:04:22.648965   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:04:22.652918   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:22.663308   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:22.796103   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:22.812101   55908 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:04:22.812586   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:22.817295   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:04:22.820274   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:22.956399   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:22.970086   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:04:22.970158   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:04:22.970389   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m03" to be "Ready" ...
	I1109 14:04:22.973665   55908 node_ready.go:49] node "ha-423884-m03" is "Ready"
	I1109 14:04:22.973696   55908 node_ready.go:38] duration metric: took 3.289742ms for node "ha-423884-m03" to be "Ready" ...
	I1109 14:04:22.973708   55908 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:04:22.973776   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:23.474233   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:23.974449   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:24.473927   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:24.973967   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:25.474635   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:25.973916   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:26.474480   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:26.974653   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:27.474731   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:27.974238   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:28.474498   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:28.973919   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:29.474517   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:29.974713   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:30.474585   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:30.974741   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:31.473916   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:31.974806   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:32.474537   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:32.973899   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:33.474884   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:33.974179   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:34.473908   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:34.973922   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:35.474186   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:35.974351   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:36.474756   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:36.973943   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:37.474873   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:37.974832   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:38.474095   55908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:04:38.486973   55908 api_server.go:72] duration metric: took 15.674824664s to wait for apiserver process to appear ...
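
The timestamps above show the apiserver-process wait polling roughly every 500 ms until `pgrep` finds a kube-apiserver matching the minikube pattern. A standalone Go sketch of that polling loop (illustrative only; the real code runs the command on the node over SSH via ssh_runner):

// waitapiserver.go - illustrative polling loop matching the ~500ms cadence
// above: run pgrep until a kube-apiserver process appears or we time out.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Same probe the log shows: sudo pgrep -xnf kube-apiserver.*minikube.*
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("kube-apiserver pid: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for kube-apiserver process")
	os.Exit(1)
}
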
	I1109 14:04:38.486994   55908 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:04:38.487013   55908 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1109 14:04:38.496492   55908 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
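
Once the process exists, health is confirmed with an HTTPS GET against /healthz that must return 200 with body "ok". A rough Go equivalent (illustrative only), trusting the cluster CA referenced in the rest.Config above:

// healthz.go - illustrative health probe: GET https://<apiserver>:8443/healthz
// and report the status code and body, trusting the minikube CA.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
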
	I1109 14:04:38.497757   55908 api_server.go:141] control plane version: v1.34.1
	I1109 14:04:38.497778   55908 api_server.go:131] duration metric: took 10.777406ms to wait for apiserver health ...
	I1109 14:04:38.497787   55908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:04:38.505258   55908 system_pods.go:59] 26 kube-system pods found
	I1109 14:04:38.505350   55908 system_pods.go:61] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.505374   55908 system_pods.go:61] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.505408   55908 system_pods.go:61] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:38.505432   55908 system_pods.go:61] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:38.505449   55908 system_pods.go:61] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:38.505466   55908 system_pods.go:61] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:38.505484   55908 system_pods.go:61] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:38.505510   55908 system_pods.go:61] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:38.505536   55908 system_pods.go:61] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:38.505555   55908 system_pods.go:61] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:38.505572   55908 system_pods.go:61] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:38.505590   55908 system_pods.go:61] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:38.505618   55908 system_pods.go:61] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:38.505641   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:38.505659   55908 system_pods.go:61] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:38.505675   55908 system_pods.go:61] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:38.505694   55908 system_pods.go:61] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:38.505721   55908 system_pods.go:61] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:38.505743   55908 system_pods.go:61] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:38.505761   55908 system_pods.go:61] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:38.505778   55908 system_pods.go:61] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:38.505796   55908 system_pods.go:61] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:38.505824   55908 system_pods.go:61] "kube-vip-ha-423884" [b043421c-6408-4df1-87d9-bc0d12fef736] Running
	I1109 14:04:38.505850   55908 system_pods.go:61] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:38.505867   55908 system_pods.go:61] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:38.505886   55908 system_pods.go:61] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:38.505905   55908 system_pods.go:74] duration metric: took 8.112367ms to wait for pod list to return data ...
	I1109 14:04:38.505935   55908 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:04:38.509739   55908 default_sa.go:45] found service account: "default"
	I1109 14:04:38.509805   55908 default_sa.go:55] duration metric: took 3.846441ms for default service account to be created ...
	I1109 14:04:38.509829   55908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:04:38.517291   55908 system_pods.go:86] 26 kube-system pods found
	I1109 14:04:38.517382   55908 system_pods.go:89] "coredns-66bc5c9577-wl6rt" [95478f0f-683f-4542-aaa4-adc037f97d0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.517407   55908 system_pods.go:89] "coredns-66bc5c9577-x2j4c" [96cca476-edbb-4139-8b01-f7fd7c7d55aa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:04:38.517444   55908 system_pods.go:89] "etcd-ha-423884" [004413ee-6d00-4b6b-8e58-dbe5c2694a91] Running
	I1109 14:04:38.517467   55908 system_pods.go:89] "etcd-ha-423884-m02" [7009b9f1-1f16-4b4e-a5e3-1d8df4987593] Running
	I1109 14:04:38.517484   55908 system_pods.go:89] "etcd-ha-423884-m03" [5c66298e-55ac-439c-8fc7-cc91a29fff8c] Running
	I1109 14:04:38.517500   55908 system_pods.go:89] "kindnet-2tcn6" [22d3ab43-1335-4838-ab1d-7368817c4287] Running
	I1109 14:04:38.517518   55908 system_pods.go:89] "kindnet-45jg2" [fe2d9f7d-ba0c-42a5-af2d-ebfb3153b0b1] Running
	I1109 14:04:38.517545   55908 system_pods.go:89] "kindnet-4s4nj" [aaab0693-39a9-46cc-b5c6-f07055a7cbc4] Running
	I1109 14:04:38.517568   55908 system_pods.go:89] "kindnet-ftnwt" [ea92690f-5103-40c4-ba92-16b97894d00c] Running
	I1109 14:04:38.517586   55908 system_pods.go:89] "kube-apiserver-ha-423884" [1981277b-07b7-4bfc-8601-d026e58476b1] Running
	I1109 14:04:38.517602   55908 system_pods.go:89] "kube-apiserver-ha-423884-m02" [e203df4c-2d27-445c-ae67-14d3e03d4a48] Running
	I1109 14:04:38.517620   55908 system_pods.go:89] "kube-apiserver-ha-423884-m03" [16908abd-f0bf-4fc2-b689-4062490d63b3] Running
	I1109 14:04:38.517648   55908 system_pods.go:89] "kube-controller-manager-ha-423884" [1d981b3b-aa6e-470c-8fd5-a97cedeb7ab7] Running
	I1109 14:04:38.517670   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m02" [047f7949-9451-4301-be27-a9073b1f8dd8] Running
	I1109 14:04:38.517688   55908 system_pods.go:89] "kube-controller-manager-ha-423884-m03" [94734b39-b4df-4736-8bb8-4289b8b59d4a] Running
	I1109 14:04:38.517705   55908 system_pods.go:89] "kube-proxy-7z7d2" [f3de4d87-91fe-4303-a8db-50a70cbce4d7] Running
	I1109 14:04:38.517722   55908 system_pods.go:89] "kube-proxy-9kff9" [3e8293e6-027b-460b-bf10-1c31ea96c7b9] Running
	I1109 14:04:38.517750   55908 system_pods.go:89] "kube-proxy-f4hgn" [4eebc45e-329c-47be-b22c-f516100cff56] Running
	I1109 14:04:38.517773   55908 system_pods.go:89] "kube-proxy-jcgxk" [511bb5aa-d398-4eb4-852e-d2b6cda335c7] Running
	I1109 14:04:38.517794   55908 system_pods.go:89] "kube-scheduler-ha-423884" [9d1a4a22-ff4f-4204-b9a0-c90bc99cf5ee] Running
	I1109 14:04:38.517812   55908 system_pods.go:89] "kube-scheduler-ha-423884-m02" [3959577a-da41-4834-8c10-1cd6da3da88d] Running
	I1109 14:04:38.517830   55908 system_pods.go:89] "kube-scheduler-ha-423884-m03" [ab2c4706-65e8-429e-9365-35ddef56a3f5] Running
	I1109 14:04:38.517856   55908 system_pods.go:89] "kube-vip-ha-423884" [b043421c-6408-4df1-87d9-bc0d12fef736] Running
	I1109 14:04:38.517877   55908 system_pods.go:89] "kube-vip-ha-423884-m02" [17d3154c-6732-452b-a209-4a8c98c5626e] Running
	I1109 14:04:38.517894   55908 system_pods.go:89] "kube-vip-ha-423884-m03" [aade9448-5515-4d1c-ae79-38c8686c171f] Running
	I1109 14:04:38.517911   55908 system_pods.go:89] "storage-provisioner" [5c249a88-1e05-40e0-b9d2-60a993f8c146] Running
	I1109 14:04:38.517933   55908 system_pods.go:126] duration metric: took 8.084994ms to wait for k8s-apps to be running ...
	I1109 14:04:38.517962   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:38.518068   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:38.532879   55908 system_svc.go:56] duration metric: took 14.908297ms WaitForService to wait for kubelet
	I1109 14:04:38.532917   55908 kubeadm.go:587] duration metric: took 15.720774062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:38.532935   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:38.536579   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536610   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536621   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536625   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536629   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536633   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536636   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:38.536648   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:38.536656   55908 node_conditions.go:105] duration metric: took 3.715265ms to run NodePressure ...
	I1109 14:04:38.536669   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:38.536695   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:38.540432   55908 out.go:203] 
	I1109 14:04:38.543707   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:38.543833   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:38.547314   55908 out.go:179] * Starting "ha-423884-m04" worker node in "ha-423884" cluster
	I1109 14:04:38.550154   55908 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:04:38.553075   55908 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:04:38.555918   55908 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:04:38.555945   55908 cache.go:65] Caching tarball of preloaded images
	I1109 14:04:38.555984   55908 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:04:38.556052   55908 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:04:38.556067   55908 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:04:38.556232   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:38.596080   55908 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:04:38.596104   55908 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:04:38.596117   55908 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:04:38.596140   55908 start.go:360] acquireMachinesLock for ha-423884-m04: {Name:mk8ea327a8bd5498886fa5c18402495ffce70373 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:04:38.596197   55908 start.go:364] duration metric: took 36.833µs to acquireMachinesLock for "ha-423884-m04"
	I1109 14:04:38.596221   55908 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:04:38.596226   55908 fix.go:54] fixHost starting: m04
	I1109 14:04:38.596505   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:04:38.628055   55908 fix.go:112] recreateIfNeeded on ha-423884-m04: state=Stopped err=<nil>
	W1109 14:04:38.628083   55908 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:04:38.631296   55908 out.go:252] * Restarting existing docker container for "ha-423884-m04" ...
	I1109 14:04:38.631384   55908 cli_runner.go:164] Run: docker start ha-423884-m04
	I1109 14:04:38.994029   55908 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 14:04:39.024143   55908 kic.go:430] container "ha-423884-m04" state is running.
	I1109 14:04:39.024645   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:39.049753   55908 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/config.json ...
	I1109 14:04:39.049997   55908 machine.go:94] provisionDockerMachine start ...
	I1109 14:04:39.050055   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:39.086245   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:39.086555   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:39.086564   55908 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:04:39.087311   55908 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54962->127.0.0.1:32833: read: connection reset by peer
	I1109 14:04:42.305377   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m04
	
	I1109 14:04:42.305403   55908 ubuntu.go:182] provisioning hostname "ha-423884-m04"
	I1109 14:04:42.305544   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:42.345625   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:42.345948   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:42.345975   55908 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-423884-m04 && echo "ha-423884-m04" | sudo tee /etc/hostname
	I1109 14:04:42.540380   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-423884-m04
	
	I1109 14:04:42.540467   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:42.568082   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:42.568508   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:42.568528   55908 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-423884-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-423884-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-423884-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:04:42.740938   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
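
Provisioning talks to the node over SSH on the container's published port (127.0.0.1:32833, user "docker", key machines/ha-423884-m04/id_rsa, all visible above). A compact Go sketch of that connection using golang.org/x/crypto/ssh (illustrative, not the libmachine implementation):

// sshprobe.go - illustrative SSH probe: dial the mapped port and run `hostname`.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; real code should verify the host key
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32833", cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("remote hostname: %s", out)
}
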
	I1109 14:04:42.740964   55908 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:04:42.740987   55908 ubuntu.go:190] setting up certificates
	I1109 14:04:42.740999   55908 provision.go:84] configureAuth start
	I1109 14:04:42.741056   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:42.758596   55908 provision.go:143] copyHostCerts
	I1109 14:04:42.758635   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:42.758666   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:04:42.758673   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:04:42.758748   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:04:42.758825   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:42.758841   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:04:42.758845   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:04:42.758872   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:04:42.758947   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:42.758966   55908 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:04:42.758970   55908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:04:42.758992   55908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:04:42.759035   55908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.ha-423884-m04 san=[127.0.0.1 192.168.49.5 ha-423884-m04 localhost minikube]
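
The server certificate above is issued by the minikube CA with the SAN list [127.0.0.1 192.168.49.5 ha-423884-m04 localhost minikube]. A compact, self-signed Go approximation (illustrative; the real flow signs with ca.pem/ca-key.pem) showing how that SAN list maps onto x509 fields:

// servercert.go - illustrative, self-signed stand-in for the generated server cert.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-423884-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above.
		DNSNames:    []string{"ha-423884-m04", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
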
	I1109 14:04:43.620778   55908 provision.go:177] copyRemoteCerts
	I1109 14:04:43.620850   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:04:43.620891   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:43.638135   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:43.746715   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:04:43.746778   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:04:43.783559   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:04:43.783620   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1109 14:04:43.821821   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:04:43.821884   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:04:43.853243   55908 provision.go:87] duration metric: took 1.112229927s to configureAuth
	I1109 14:04:43.853316   55908 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:04:43.853606   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:43.853756   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:43.895433   55908 main.go:143] libmachine: Using SSH client type: native
	I1109 14:04:43.895732   55908 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I1109 14:04:43.895746   55908 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:04:44.332263   55908 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:04:44.332289   55908 machine.go:97] duration metric: took 5.282283014s to provisionDockerMachine
	I1109 14:04:44.332300   55908 start.go:293] postStartSetup for "ha-423884-m04" (driver="docker")
	I1109 14:04:44.332310   55908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:04:44.332371   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:04:44.332415   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.353937   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.464143   55908 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:04:44.470188   55908 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:04:44.470214   55908 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:04:44.470225   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:04:44.470281   55908 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:04:44.470354   55908 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:04:44.470361   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:04:44.470470   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:04:44.479795   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:44.529226   55908 start.go:296] duration metric: took 196.901694ms for postStartSetup
	I1109 14:04:44.529386   55908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:04:44.529460   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.554649   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.673604   55908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:04:44.680762   55908 fix.go:56] duration metric: took 6.08452744s for fixHost
	I1109 14:04:44.680784   55908 start.go:83] releasing machines lock for "ha-423884-m04", held for 6.084574408s
	I1109 14:04:44.680867   55908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 14:04:44.721415   55908 out.go:179] * Found network options:
	I1109 14:04:44.724159   55908 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1109 14:04:44.726873   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726905   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726917   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726942   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726952   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	W1109 14:04:44.726961   55908 proxy.go:120] fail to check proxy env: Error ip not in block
	I1109 14:04:44.727033   55908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:04:44.727074   55908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:04:44.727134   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.727085   55908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 14:04:44.759201   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:44.763544   55908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 14:04:45.037350   55908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:04:45.135550   55908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:04:45.135658   55908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:04:45.148313   55908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:04:45.148341   55908 start.go:496] detecting cgroup driver to use...
	I1109 14:04:45.148377   55908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:04:45.148433   55908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:04:45.185399   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:04:45.214772   55908 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:04:45.214846   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:04:45.250953   55908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:04:45.287278   55908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:04:45.661062   55908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:04:45.935411   55908 docker.go:234] disabling docker service ...
	I1109 14:04:45.935486   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:04:45.952438   55908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:04:45.980819   55908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:04:46.226547   55908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:04:46.528888   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:04:46.569464   55908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:04:46.593467   55908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:04:46.593541   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.617190   55908 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:04:46.617307   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.632140   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.655050   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.669679   55908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:04:46.703425   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.732454   55908 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.748482   55908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:04:46.774220   55908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:04:46.794338   55908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:04:46.805580   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:47.010084   55908 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:04:47.173577   55908 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:04:47.173656   55908 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:04:47.181540   55908 start.go:564] Will wait 60s for crictl version
	I1109 14:04:47.181604   55908 ssh_runner.go:195] Run: which crictl
	I1109 14:04:47.186006   55908 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:04:47.222300   55908 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:04:47.222379   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:47.253413   55908 ssh_runner.go:195] Run: crio --version
	I1109 14:04:47.291652   55908 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:04:47.294554   55908 out.go:179]   - env NO_PROXY=192.168.49.2
	I1109 14:04:47.297616   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1109 14:04:47.301230   55908 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1109 14:04:47.304267   55908 cli_runner.go:164] Run: docker network inspect ha-423884 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:04:47.343687   55908 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1109 14:04:47.347710   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:47.360845   55908 mustload.go:66] Loading cluster: ha-423884
	I1109 14:04:47.361083   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:47.361322   55908 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 14:04:47.390238   55908 host.go:66] Checking if "ha-423884" exists ...
	I1109 14:04:47.390509   55908 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884 for IP: 192.168.49.5
	I1109 14:04:47.390516   55908 certs.go:195] generating shared ca certs ...
	I1109 14:04:47.390534   55908 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:04:47.390655   55908 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:04:47.390695   55908 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:04:47.390705   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 14:04:47.390717   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 14:04:47.390728   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 14:04:47.390739   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 14:04:47.390789   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:04:47.390815   55908 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:04:47.390823   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:04:47.390848   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:04:47.390868   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:04:47.390889   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:04:47.390931   55908 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:04:47.390957   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem -> /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.390969   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.390980   55908 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.390996   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:04:47.419171   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:04:47.458480   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:04:47.491840   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:04:47.515467   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:04:47.547694   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:04:47.571204   55908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:04:47.596967   55908 ssh_runner.go:195] Run: openssl version
	I1109 14:04:47.604617   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:04:47.618704   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.623578   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.623648   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:04:47.684940   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:04:47.694950   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:04:47.704570   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.709468   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.709530   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:04:47.765768   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:04:47.777604   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:04:47.788177   55908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.793126   55908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.793191   55908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:04:47.845154   55908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:04:47.856386   55908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:04:47.861306   55908 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:04:47.861350   55908 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.1 crio false true} ...
	I1109 14:04:47.861449   55908 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-423884-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-423884 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:04:47.861522   55908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:04:47.870269   55908 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:04:47.870337   55908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1109 14:04:47.880368   55908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1109 14:04:47.897846   55908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:04:47.917114   55908 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1109 14:04:47.924685   55908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:04:47.936633   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:48.172177   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:48.203009   55908 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1109 14:04:48.203488   55908 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:04:48.206078   55908 out.go:179] * Verifying Kubernetes components...
	I1109 14:04:48.209257   55908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:04:48.462006   55908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:04:48.478911   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1109 14:04:48.478989   55908 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1109 14:04:48.479221   55908 node_ready.go:35] waiting up to 6m0s for node "ha-423884-m04" to be "Ready" ...
	I1109 14:04:48.482317   55908 node_ready.go:49] node "ha-423884-m04" is "Ready"
	I1109 14:04:48.482349   55908 node_ready.go:38] duration metric: took 3.109285ms for node "ha-423884-m04" to be "Ready" ...
	I1109 14:04:48.482363   55908 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:04:48.482419   55908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:04:48.500348   55908 system_svc.go:56] duration metric: took 17.977329ms WaitForService to wait for kubelet
	I1109 14:04:48.500378   55908 kubeadm.go:587] duration metric: took 297.325981ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:04:48.500397   55908 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:04:48.505686   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505725   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505737   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505742   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505745   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505750   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505754   55908 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:04:48.505758   55908 node_conditions.go:123] node cpu capacity is 2
	I1109 14:04:48.505763   55908 node_conditions.go:105] duration metric: took 5.360822ms to run NodePressure ...
	I1109 14:04:48.505778   55908 start.go:242] waiting for startup goroutines ...
	I1109 14:04:48.505806   55908 start.go:256] writing updated cluster config ...
	I1109 14:04:48.506138   55908 ssh_runner.go:195] Run: rm -f paused
	I1109 14:04:48.511449   55908 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:04:48.512086   55908 kapi.go:59] client config for ha-423884: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.crt", KeyFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/profiles/ha-423884/client.key", CAFile:"/home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x21276e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 14:04:48.531812   55908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wl6rt" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:04:50.538801   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	W1109 14:04:53.041776   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	W1109 14:04:55.540126   55908 pod_ready.go:104] pod "coredns-66bc5c9577-wl6rt" is not "Ready", error: <nil>
	I1109 14:04:57.039850   55908 pod_ready.go:94] pod "coredns-66bc5c9577-wl6rt" is "Ready"
	I1109 14:04:57.039917   55908 pod_ready.go:86] duration metric: took 8.508070998s for pod "coredns-66bc5c9577-wl6rt" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.039928   55908 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x2j4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.047591   55908 pod_ready.go:94] pod "coredns-66bc5c9577-x2j4c" is "Ready"
	I1109 14:04:57.047620   55908 pod_ready.go:86] duration metric: took 7.684548ms for pod "coredns-66bc5c9577-x2j4c" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.051339   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.057478   55908 pod_ready.go:94] pod "etcd-ha-423884" is "Ready"
	I1109 14:04:57.057507   55908 pod_ready.go:86] duration metric: took 6.138948ms for pod "etcd-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.057516   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.063675   55908 pod_ready.go:94] pod "etcd-ha-423884-m02" is "Ready"
	I1109 14:04:57.063703   55908 pod_ready.go:86] duration metric: took 6.180712ms for pod "etcd-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.063713   55908 pod_ready.go:83] waiting for pod "etcd-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.232913   55908 request.go:683] "Waited before sending request" delay="166.184726ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:04:57.235976   55908 pod_ready.go:94] pod "etcd-ha-423884-m03" is "Ready"
	I1109 14:04:57.236003   55908 pod_ready.go:86] duration metric: took 172.283157ms for pod "etcd-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.433310   55908 request.go:683] "Waited before sending request" delay="197.214303ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1109 14:04:57.437206   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.632527   55908 request.go:683] "Waited before sending request" delay="195.228871ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884"
	I1109 14:04:57.833084   55908 request.go:683] "Waited before sending request" delay="197.197966ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:04:57.836198   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884" is "Ready"
	I1109 14:04:57.836230   55908 pod_ready.go:86] duration metric: took 398.997813ms for pod "kube-apiserver-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:57.836239   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.032538   55908 request.go:683] "Waited before sending request" delay="196.215039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884-m02"
	I1109 14:04:58.232521   55908 request.go:683] "Waited before sending request" delay="195.230554ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:04:58.236341   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884-m02" is "Ready"
	I1109 14:04:58.236367   55908 pod_ready.go:86] duration metric: took 400.120914ms for pod "kube-apiserver-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.236376   55908 pod_ready.go:83] waiting for pod "kube-apiserver-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.433023   55908 request.go:683] "Waited before sending request" delay="196.538827ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-423884-m03"
	I1109 14:04:58.632901   55908 request.go:683] "Waited before sending request" delay="196.260046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:04:58.636121   55908 pod_ready.go:94] pod "kube-apiserver-ha-423884-m03" is "Ready"
	I1109 14:04:58.636150   55908 pod_ready.go:86] duration metric: took 399.76645ms for pod "kube-apiserver-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:58.832522   55908 request.go:683] "Waited before sending request" delay="196.25788ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1109 14:04:58.836640   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.033076   55908 request.go:683] "Waited before sending request" delay="196.288797ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884"
	I1109 14:04:59.233471   55908 request.go:683] "Waited before sending request" delay="197.170343ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:04:59.236562   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884" is "Ready"
	I1109 14:04:59.236586   55908 pod_ready.go:86] duration metric: took 399.915672ms for pod "kube-controller-manager-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.236595   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.432815   55908 request.go:683] "Waited before sending request" delay="196.151501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884-m02"
	I1109 14:04:59.633389   55908 request.go:683] "Waited before sending request" delay="197.339699ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:04:59.636611   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884-m02" is "Ready"
	I1109 14:04:59.636639   55908 pod_ready.go:86] duration metric: took 400.036716ms for pod "kube-controller-manager-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.636649   55908 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:04:59.832944   55908 request.go:683] "Waited before sending request" delay="196.225586ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-423884-m03"
	I1109 14:05:00.032735   55908 request.go:683] "Waited before sending request" delay="196.153889ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:00.114688   55908 pod_ready.go:94] pod "kube-controller-manager-ha-423884-m03" is "Ready"
	I1109 14:05:00.114728   55908 pod_ready.go:86] duration metric: took 478.071803ms for pod "kube-controller-manager-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.242596   55908 request.go:683] "Waited before sending request" delay="127.725515ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1109 14:05:00.298102   55908 pod_ready.go:83] waiting for pod "kube-proxy-7z7d2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.433403   55908 request.go:683] "Waited before sending request" delay="135.18186ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7z7d2"
	I1109 14:05:00.633480   55908 request.go:683] "Waited before sending request" delay="187.320382ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884"
	I1109 14:05:00.659363   55908 pod_ready.go:94] pod "kube-proxy-7z7d2" is "Ready"
	I1109 14:05:00.659405   55908 pod_ready.go:86] duration metric: took 361.264172ms for pod "kube-proxy-7z7d2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.659421   55908 pod_ready.go:83] waiting for pod "kube-proxy-9kff9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:00.832720   55908 request.go:683] "Waited before sending request" delay="173.209595ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kff9"
	I1109 14:05:01.032589   55908 request.go:683] "Waited before sending request" delay="193.218072ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m04"
	I1109 14:05:01.233422   55908 request.go:683] "Waited before sending request" delay="73.212921ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9kff9"
	I1109 14:05:01.433041   55908 request.go:683] "Waited before sending request" delay="190.18265ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m04"
	I1109 14:05:01.437082   55908 pod_ready.go:94] pod "kube-proxy-9kff9" is "Ready"
	I1109 14:05:01.437110   55908 pod_ready.go:86] duration metric: took 777.680802ms for pod "kube-proxy-9kff9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.437119   55908 pod_ready.go:83] waiting for pod "kube-proxy-f4hgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.632461   55908 request.go:683] "Waited before sending request" delay="195.271922ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f4hgn"
	I1109 14:05:01.832811   55908 request.go:683] "Waited before sending request" delay="187.236042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m02"
	I1109 14:05:01.836535   55908 pod_ready.go:94] pod "kube-proxy-f4hgn" is "Ready"
	I1109 14:05:01.836565   55908 pod_ready.go:86] duration metric: took 399.438784ms for pod "kube-proxy-f4hgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:01.836576   55908 pod_ready.go:83] waiting for pod "kube-proxy-jcgxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:02.032823   55908 request.go:683] "Waited before sending request" delay="196.168826ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jcgxk"
	I1109 14:05:02.232950   55908 request.go:683] "Waited before sending request" delay="192.345884ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:02.432483   55908 request.go:683] "Waited before sending request" delay="95.122005ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jcgxk"
	I1109 14:05:02.632558   55908 request.go:683] "Waited before sending request" delay="196.186501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:03.032762   55908 request.go:683] "Waited before sending request" delay="191.358141ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	I1109 14:05:03.433075   55908 request.go:683] "Waited before sending request" delay="91.200576ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-423884-m03"
	W1109 14:05:03.843130   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:05.843241   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:07.843386   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:10.345843   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	W1109 14:05:12.347116   55908 pod_ready.go:104] pod "kube-proxy-jcgxk" is not "Ready", error: <nil>
	I1109 14:05:12.843484   55908 pod_ready.go:94] pod "kube-proxy-jcgxk" is "Ready"
	I1109 14:05:12.843511   55908 pod_ready.go:86] duration metric: took 11.006928371s for pod "kube-proxy-jcgxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.847315   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.853111   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884" is "Ready"
	I1109 14:05:12.853137   55908 pod_ready.go:86] duration metric: took 5.793657ms for pod "kube-scheduler-ha-423884" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.853146   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.859861   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884-m02" is "Ready"
	I1109 14:05:12.859981   55908 pod_ready.go:86] duration metric: took 6.827161ms for pod "kube-scheduler-ha-423884-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.860005   55908 pod_ready.go:83] waiting for pod "kube-scheduler-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.867050   55908 pod_ready.go:94] pod "kube-scheduler-ha-423884-m03" is "Ready"
	I1109 14:05:12.867075   55908 pod_ready.go:86] duration metric: took 7.050311ms for pod "kube-scheduler-ha-423884-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:05:12.867087   55908 pod_ready.go:40] duration metric: took 24.355592064s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:05:12.924097   55908 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:05:12.927451   55908 out.go:179] * Done! kubectl is now configured to use "ha-423884" cluster and "default" namespace by default
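
	Editor's note: the pod_ready.go readiness loop above and the repeated "Waited before sending request ... client-side throttling" messages both come from client-go. The rest.Config dumped earlier leaves QPS and Burst at 0, so the client falls back to the library defaults (QPS 5, Burst 10) and the burst of GETs against pods and nodes gets rate-limited. A minimal sketch of the same pattern, assuming client-go and a kubeconfig at the default path; this is not minikube's actual implementation, and the pod name and QPS/Burst values are only illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig written by minikube (default ~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// QPS/Burst default to 5/10 when left at zero; raising them avoids the
		// "client-side throttling" waits seen in the log (values are illustrative).
		cfg.QPS = 50
		cfg.Burst = 100

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll until the pod's Ready condition is True, or give up after 4 minutes.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-jcgxk", metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient API errors and keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	Raising QPS/Burst as in the sketch is the usual way to reduce client-side throttling when a tool issues many requests in a short window, at the cost of more load on the apiserver.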
	
	
	==> CRI-O <==
	Nov 09 14:04:15 ha-423884 crio[619]: time="2025-11-09T14:04:15.560693803Z" level=info msg="Started container" PID=1120 containerID=b63a9a2c4e5fbd3fad199cd6e213c4eaeb9cf307dbae0131d130c7d22384f79e description=default/busybox-7b57f96db7-bprtw/busybox id=6e691df6-c3f8-4e79-938c-13c481c463f8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87
	Nov 09 14:04:45 ha-423884 conmon[1119]: conmon 5bed382b465f29e125aa <ninfo>: container 1132 exited with status 1
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.632047702Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58fafaad-5a62-4ed2-a48c-ac5cfcffacd0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.633906069Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=36005cb0-6a41-40e9-950b-0b9545dd375d name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.64579785Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=95caab63-861a-49ee-8b75-b5d15cfb1b60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.645906225Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.658781722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662347217Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/184c9fdfb9f2c0bab041655609ae7f88de235f6f6f171cc5cec8c531dddf11f3/merged/etc/passwd: no such file or directory"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662465462Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/184c9fdfb9f2c0bab041655609ae7f88de235f6f6f171cc5cec8c531dddf11f3/merged/etc/group: no such file or directory"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.662915043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.702334944Z" level=info msg="Created container b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c: kube-system/storage-provisioner/storage-provisioner" id=95caab63-861a-49ee-8b75-b5d15cfb1b60 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.714514458Z" level=info msg="Starting container: b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c" id=63571f8b-fba8-4137-bf17-f12c81bfa57d name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:04:46 ha-423884 crio[619]: time="2025-11-09T14:04:46.721604636Z" level=info msg="Started container" PID=1382 containerID=b305e5d843218e1b3e886cb2f5ba534d02cd5d0a3a41d07de89ec1654cf7277c description=kube-system/storage-provisioner/storage-provisioner id=63571f8b-fba8-4137-bf17-f12c81bfa57d name=/runtime.v1.RuntimeService/StartContainer sandboxID=624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.4215931Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.42716999Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.427323214Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.427398128Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.431810591Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.432264101Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.43234498Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.436394288Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.436552493Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.43662753Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.440324498Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:04:55 ha-423884 crio[619]: time="2025-11-09T14:04:55.440479609Z" level=info msg="Updated default CNI network name to kindnet"
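
	Editor's note: the "CNI monitoring event" lines above show CRI-O reacting to kindnet rewriting /etc/cni/net.d/10-kindnet.conflist (CREATE and WRITE of the .temp file, then a RENAME into place). A rough illustration of that directory-watch pattern using github.com/fsnotify/fsnotify; this is a sketch of the idea, not CRI-O's code:

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		watcher, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer watcher.Close()

		// Watch the CNI configuration directory, as CRI-O does.
		if err := watcher.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev, ok := <-watcher.Events:
				if !ok {
					return
				}
				// ev.Op is CREATE/WRITE/RENAME/REMOVE/CHMOD; ev.Name is the file path.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			case err, ok := <-watcher.Errors:
				if !ok {
					return
				}
				log.Println("watch error:", err)
			}
		}
	}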
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	b305e5d843218       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Running             storage-provisioner       2                   624febe3bef0c       storage-provisioner                 kube-system
	4e1565497868e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   1                   156c341c8adee       coredns-66bc5c9577-wl6rt            kube-system
	f0fd891d62df4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   1                   0149d6cd55157       coredns-66bc5c9577-x2j4c            kube-system
	5bed382b465f2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Exited              storage-provisioner       1                   624febe3bef0c       storage-provisioner                 kube-system
	b63a9a2c4e5fb       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago       Running             busybox                   1                   49d4f70bf4320       busybox-7b57f96db7-bprtw            default
	6db8ccf0f7e5d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 minutes ago       Running             kube-proxy                1                   7482e6b61af8f       kube-proxy-7z7d2                    kube-system
	2858b15648473       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Running             kindnet-cni               1                   ef99cabeed954       kindnet-4s4nj                       kube-system
	d4b5eae8c40aa       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   2 minutes ago       Running             kube-controller-manager   9                   8d358a601f8e9       kube-controller-manager-ha-423884   kube-system
	7a8b6eec5acc3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   2 minutes ago       Running             kube-apiserver            8                   5dc1bc8f687be       kube-apiserver-ha-423884            kube-system
	78f5efcea671f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   3 minutes ago       Exited              kube-controller-manager   8                   8d358a601f8e9       kube-controller-manager-ha-423884   kube-system
	947390d8997ff       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   3 minutes ago       Running             etcd                      3                   0c595ba9083de       etcd-ha-423884                      kube-system
	c0ba74e816e13       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   3 minutes ago       Exited              kube-apiserver            7                   5dc1bc8f687be       kube-apiserver-ha-423884            kube-system
	374a5429d6a56       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   3 minutes ago       Running             kube-scheduler            2                   3ee3bcbc0fa87       kube-scheduler-ha-423884            kube-system
	785a023345fda       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   3 minutes ago       Running             kube-vip                  1                   90a0cbb7d6ed9       kube-vip-ha-423884                  kube-system
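
	Editor's note: the table above is the CRI view of the node's containers. A small sketch of how such a listing can be produced over the CRI v1 API on CRI-O's default socket; it assumes the k8s.io/cri-api and google.golang.org/grpc modules, and the field selection and formatting are illustrative rather than what produced the table:

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// CRI-O's default CRI socket.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// State is an enum (CONTAINER_RUNNING, CONTAINER_EXITED, ...).
			fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
		}
	}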
	
	
	==> coredns [4e1565497868eb720e6f89fa2f64f1892d9d7c7fb165c52c75c00a6e26644dcd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56290 - 23869 "HINFO IN 4295743501471833009.7362039906491692351. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.027167594s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f0fd891d62df4ba35f7f2bb9f867a20bb1ee66fec8156164361837f74c33b151] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41286 - 39887 "HINFO IN 9165684468172783655.3008217872247164606. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020928117s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
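
	Editor's note: both coredns instances log plugin/ready "Still waiting on: kubernetes" together with reflector timeouts dialing 10.96.0.1:443, i.e. the ready plugin withholds success until the kubernetes plugin has synced, which it cannot do while the service VIP is unreachable. By default the ready plugin serves HTTP on :8181/ready; a hypothetical probe of that endpoint (the pod IP below is made up for illustration):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		// 10.244.0.10 is an illustrative pod IP; substitute the real coredns pod IP.
		resp, err := client.Get("http://10.244.0.10:8181/ready")
		if err != nil {
			fmt.Println("not ready:", err)
			return
		}
		defer resp.Body.Close()
		// 200 once every plugin registered with `ready` reports readiness.
		fmt.Println("ready status:", resp.StatusCode)
	}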
	
	
	==> describe nodes <==
	Name:               ha-423884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T13_50_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:50:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:06:33 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:06:33 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:06:33 +0000   Sun, 09 Nov 2025 13:50:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:06:33 +0000   Sun, 09 Nov 2025 13:51:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-423884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                657918f5-0b52-434a-8e2d-4cc93dc46e2f
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-bprtw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-66bc5c9577-wl6rt             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 coredns-66bc5c9577-x2j4c             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-ha-423884                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-4s4nj                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-423884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-423884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-7z7d2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-423884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-423884                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 2m37s                  kube-proxy       
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)      kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 16m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-423884 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   Starting                 3m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m19s (x8 over 3m19s)  kubelet          Node ha-423884 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m19s (x8 over 3m19s)  kubelet          Node ha-423884 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m19s (x8 over 3m19s)  kubelet          Node ha-423884 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           2m39s                  node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           2m                     node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-423884 event: Registered Node ha-423884 in Controller
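
	Editor's note: the Conditions and Capacity tables above correspond to what node_conditions.go checks in the log (NodePressure plus cpu and ephemeral-storage capacity). A minimal client-go sketch that reads the same fields for this node, assuming a kubeconfig at the default path; it is an illustration, not minikube's code:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-423884", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Same rows as the Conditions table: MemoryPressure, DiskPressure, PIDPressure, Ready.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		// Same figures node_conditions.go logs: cpu capacity and ephemeral storage.
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String(),
			"ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	}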
	
	
	Name:               ha-423884-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_51_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:51:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m02
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:05:52 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:05:52 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:05:52 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:05:52 +0000   Sun, 09 Nov 2025 13:54:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-423884-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                36d1a056-7fa9-4feb-8fa0-03ee70e31c22
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c9qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-423884-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-ftnwt                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-423884-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-423884-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-f4hgn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-423884-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-423884-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m27s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   RegisteredNode           15m                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-423884-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-423884-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-423884-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeNotReady             12m                    node-controller  Node ha-423884-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   Starting                 3m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m15s (x8 over 3m16s)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m15s (x8 over 3m16s)  kubelet          Node ha-423884-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m15s (x8 over 3m16s)  kubelet          Node ha-423884-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           2m39s                  node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           2m                     node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-423884-m02 event: Registered Node ha-423884-m02 in Controller
	
	
	Name:               ha-423884-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_52_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:52:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m03
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:06:12 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:06:12 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:06:12 +0000   Sun, 09 Nov 2025 13:52:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:06:12 +0000   Sun, 09 Nov 2025 13:52:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-423884-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d57bf8b4-5512-4316-94f7-79a9c657e155
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-5bfxx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-423884-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-45jg2                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-423884-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-423884-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-jcgxk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-423884-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-423884-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 102s                   kube-proxy       
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   Starting                 2m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m40s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node ha-423884-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node ha-423884-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m40s (x8 over 2m40s)  kubelet          Node ha-423884-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           2m39s                  node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           2m                     node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-423884-m03 event: Registered Node ha-423884-m03 in Controller
	
	
	Name:               ha-423884-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T13_53_07_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 13:53:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m04
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:04:52 +0000   Sun, 09 Nov 2025 13:53:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-423884-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                750e1d79-71b2-4dc5-bf03-65a8c044964c
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-2tcn6       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-9kff9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 113s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)      kubelet          Node ha-423884-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)      kubelet          Node ha-423884-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)      kubelet          Node ha-423884-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           13m                    node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   CIDRAssignmentFailed     13m                    cidrAllocator    Node ha-423884-m04 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           13m                    node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-423884-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           2m39s                  node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   Starting                 2m15s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m15s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m11s (x8 over 2m14s)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m11s (x8 over 2m14s)  kubelet          Node ha-423884-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m11s (x8 over 2m14s)  kubelet          Node ha-423884-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m                     node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	  Normal   RegisteredNode           54s                    node-controller  Node ha-423884-m04 event: Registered Node ha-423884-m04 in Controller
	
	
	Name:               ha-423884-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-423884-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=ha-423884
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_11_09T14_06_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:06:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-423884-m05
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:06:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:06:46 +0000   Sun, 09 Nov 2025 14:06:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:06:46 +0000   Sun, 09 Nov 2025 14:06:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:06:46 +0000   Sun, 09 Nov 2025 14:06:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:06:46 +0000   Sun, 09 Nov 2025 14:06:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-423884-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                1fc70477-b16b-405f-8157-408b8fa43a9d
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-423884-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         51s
	  kube-system                 kindnet-44gxs                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      49s
	  kube-system                 kube-apiserver-ha-423884-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-ha-423884-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-proxy-kvnr4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-ha-423884-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kube-vip-ha-423884-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        37s   kube-proxy       
	  Normal  RegisteredNode  50s   node-controller  Node ha-423884-m05 event: Registered Node ha-423884-m05 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node ha-423884-m05 event: Registered Node ha-423884-m05 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node ha-423884-m05 event: Registered Node ha-423884-m05 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node ha-423884-m05 event: Registered Node ha-423884-m05 in Controller
	
	
	==> dmesg <==
	[Nov 9 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015355] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.494196] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034512] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.743336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.564676] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 9 13:30] overlayfs: idmapped layers are currently not supported
	[  +0.081590] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 9 13:36] overlayfs: idmapped layers are currently not supported
	[ +50.497753] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:53] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 9 13:55] overlayfs: idmapped layers are currently not supported
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:03] overlayfs: idmapped layers are currently not supported
	[  +3.581786] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:05] overlayfs: idmapped layers are currently not supported
	[ +45.728314] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [947390d8997ffb89bea0e3c1e1bca5c1f8dd53d457d88db5aafd7664dbcb65b2] <==
	{"level":"info","ts":"2025-11-09T14:06:01.965628Z","caller":"traceutil/trace.go:172","msg":"trace[1175708871] transaction","detail":"{read_only:false; response_revision:2483; number_of_response:1; }","duration":"114.598676ms","start":"2025-11-09T14:06:01.851008Z","end":"2025-11-09T14:06:01.965607Z","steps":["trace[1175708871] 'process raft request'  (duration: 93.916098ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.965846Z","caller":"traceutil/trace.go:172","msg":"trace[1120643452] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2483; }","duration":"114.631676ms","start":"2025-11-09T14:06:01.851193Z","end":"2025-11-09T14:06:01.965824Z","steps":["trace[1120643452] 'process raft request'  (duration: 93.785289ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.966010Z","caller":"traceutil/trace.go:172","msg":"trace[607380676] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2483; }","duration":"113.339469ms","start":"2025-11-09T14:06:01.852663Z","end":"2025-11-09T14:06:01.966003Z","steps":["trace[607380676] 'process raft request'  (duration: 92.371981ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.966096Z","caller":"traceutil/trace.go:172","msg":"trace[504956675] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2483; }","duration":"113.3348ms","start":"2025-11-09T14:06:01.852755Z","end":"2025-11-09T14:06:01.966090Z","steps":["trace[504956675] 'process raft request'  (duration: 92.297313ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.966163Z","caller":"traceutil/trace.go:172","msg":"trace[663412455] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2483; }","duration":"113.303193ms","start":"2025-11-09T14:06:01.852854Z","end":"2025-11-09T14:06:01.966157Z","steps":["trace[663412455] 'process raft request'  (duration: 92.214613ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:01.967022Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-09T14:06:02.169292Z","caller":"traceutil/trace.go:172","msg":"trace[208251802] linearizableReadLoop","detail":"{readStateIndex:3029; appliedIndex:3030; }","duration":"106.922313ms","start":"2025-11-09T14:06:02.062354Z","end":"2025-11-09T14:06:02.169277Z","steps":["trace[208251802] 'read index received'  (duration: 106.916364ms)","trace[208251802] 'applied index is now lower than readState.Index'  (duration: 4.923µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-09T14:06:02.169639Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.268925ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-423884-m05\" limit:1 ","response":"range_response_count:1 size:4587"}
	{"level":"info","ts":"2025-11-09T14:06:02.206785Z","caller":"traceutil/trace.go:172","msg":"trace[77792878] range","detail":"{range_begin:/registry/minions/ha-423884-m05; range_end:; response_count:1; response_revision:2492; }","duration":"144.415374ms","start":"2025-11-09T14:06:02.062350Z","end":"2025-11-09T14:06:02.206765Z","steps":["trace[77792878] 'agreement among raft nodes before linearized reading'  (duration: 107.1649ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:02.278681Z","caller":"traceutil/trace.go:172","msg":"trace[1385268954] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2514; }","duration":"109.903656ms","start":"2025-11-09T14:06:02.168759Z","end":"2025-11-09T14:06:02.278663Z","steps":["trace[1385268954] 'process raft request'  (duration: 97.195103ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:02.279603Z","caller":"traceutil/trace.go:172","msg":"trace[902015390] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2514; }","duration":"110.746472ms","start":"2025-11-09T14:06:02.168844Z","end":"2025-11-09T14:06:02.279591Z","steps":["trace[902015390] 'process raft request'  (duration: 97.179907ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:02.290821Z","caller":"traceutil/trace.go:172","msg":"trace[1310650415] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2514; }","duration":"121.880819ms","start":"2025-11-09T14:06:02.168925Z","end":"2025-11-09T14:06:02.290806Z","steps":["trace[1310650415] 'process raft request'  (duration: 97.120205ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:06:02.314753Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.782948ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:4723"}
	{"level":"info","ts":"2025-11-09T14:06:02.315134Z","caller":"traceutil/trace.go:172","msg":"trace[1058310503] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:2519; }","duration":"110.282187ms","start":"2025-11-09T14:06:02.204838Z","end":"2025-11-09T14:06:02.315121Z","steps":["trace[1058310503] 'agreement among raft nodes before linearized reading'  (duration: 109.588615ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-09T14:06:02.645526Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"175.681057ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:4723"}
	{"level":"info","ts":"2025-11-09T14:06:02.645650Z","caller":"traceutil/trace.go:172","msg":"trace[979730927] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:2536; }","duration":"175.816033ms","start":"2025-11-09T14:06:02.469821Z","end":"2025-11-09T14:06:02.645637Z","steps":["trace[979730927] 'agreement among raft nodes before linearized reading'  (duration: 175.572994ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:02.683922Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-09T14:06:02.684107Z","caller":"traceutil/trace.go:172","msg":"trace[283009681] transaction","detail":"{read_only:false; response_revision:2547; number_of_response:1; }","duration":"104.662963ms","start":"2025-11-09T14:06:02.579425Z","end":"2025-11-09T14:06:02.684088Z","steps":["trace[283009681] 'process raft request'  (duration: 104.209025ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:05.286292Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"warn","ts":"2025-11-09T14:06:05.433331Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.146558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kindnet-wd67q\" limit:1 ","response":"range_response_count:1 size:3694"}
	{"level":"info","ts":"2025-11-09T14:06:05.433485Z","caller":"traceutil/trace.go:172","msg":"trace[1667098546] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-wd67q; range_end:; response_count:1; response_revision:2646; }","duration":"137.31027ms","start":"2025-11-09T14:06:05.296162Z","end":"2025-11-09T14:06:05.433472Z","steps":["trace[1667098546] 'agreement among raft nodes before linearized reading'  (duration: 135.691937ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:05.438948Z","caller":"traceutil/trace.go:172","msg":"trace[861317397] transaction","detail":"{read_only:false; number_of_response:1; response_revision:2648; }","duration":"102.4751ms","start":"2025-11-09T14:06:05.336459Z","end":"2025-11-09T14:06:05.438934Z","steps":["trace[861317397] 'process raft request'  (duration: 102.381512ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:05.445198Z","caller":"traceutil/trace.go:172","msg":"trace[1409993524] transaction","detail":"{read_only:false; response_revision:2648; number_of_response:1; }","duration":"108.804568ms","start":"2025-11-09T14:06:05.336375Z","end":"2025-11-09T14:06:05.445179Z","steps":["trace[1409993524] 'process raft request'  (duration: 102.432088ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:06:06.978903Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-11-09T14:06:17.995725Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"33821fa08d210d57","bytes":5253003,"size":"5.3 MB","took":"30.505789898s"}
	
	
	==> kernel <==
	 14:06:55 up 49 min,  0 user,  load average: 3.91, 2.54, 1.61
	Linux ha-423884 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2858b156484730345bc39e8edca1ca8eabf5a6c2eb446824527423d351ec9fd3] <==
	I1109 14:06:25.419551       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:06:25.419603       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:06:25.419608       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:06:25.419654       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1109 14:06:25.419659       1 main.go:324] Node ha-423884-m05 has CIDR [10.244.4.0/24] 
	I1109 14:06:35.418634       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 14:06:35.418710       1 main.go:301] handling current node
	I1109 14:06:35.418728       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:06:35.418737       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:06:35.418884       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:06:35.418890       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:06:35.418939       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:06:35.419816       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:06:35.419996       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1109 14:06:35.420007       1 main.go:324] Node ha-423884-m05 has CIDR [10.244.4.0/24] 
	I1109 14:06:45.423419       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1109 14:06:45.423457       1 main.go:301] handling current node
	I1109 14:06:45.423473       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1109 14:06:45.423482       1 main.go:324] Node ha-423884-m02 has CIDR [10.244.1.0/24] 
	I1109 14:06:45.423721       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1109 14:06:45.423740       1 main.go:324] Node ha-423884-m03 has CIDR [10.244.2.0/24] 
	I1109 14:06:45.423834       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1109 14:06:45.423846       1 main.go:324] Node ha-423884-m04 has CIDR [10.244.3.0/24] 
	I1109 14:06:45.423983       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1109 14:06:45.423999       1 main.go:324] Node ha-423884-m05 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [7a8b6eec5acc3d0e17aa26ea522ab1781b387d043859460f3c3aa2c80f07c6d7] <==
	I1109 14:04:10.251082       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:04:10.254066       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:04:10.254147       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:04:10.254176       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:04:10.254222       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:04:10.259503       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:04:10.259679       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:04:10.259777       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:04:10.265702       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:04:10.265731       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:04:10.268080       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:04:10.269054       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:04:10.282785       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:04:10.282828       1 policy_source.go:240] refreshing policies
	W1109 14:04:10.283375       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.4]
	I1109 14:04:10.285247       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:04:10.308873       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:04:10.309359       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1109 14:04:10.317898       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1109 14:04:10.610930       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1109 14:04:12.050948       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	I1109 14:04:13.586194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:04:16.069224       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:04:16.362429       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:04:17.009317       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [c0ba74e816e1338d86f2f29c211b83c172784bbf106dba7bae518b2ee0201a4e] <==
	I1109 14:03:36.079801       1 server.go:150] Version: v1.34.1
	I1109 14:03:36.079970       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1109 14:03:37.231523       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:03:37.231632       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:03:37.231673       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:03:37.231710       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1109 14:03:37.231743       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:03:37.231775       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1109 14:03:37.233731       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:03:37.233812       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1109 14:03:37.233841       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:03:37.233872       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1109 14:03:37.233903       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:03:37.233935       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:03:37.264427       1 logging.go:55] [core] [Channel #2 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:37.266135       1 logging.go:55] [core] [Channel #1 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:03:37.266724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:03:37.284361       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:03:37.285347       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:03:37.285437       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:03:37.285697       1 instance.go:239] Using reconciler: lease
	W1109 14:03:37.287884       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:57.261619       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:03:57.262651       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1109 14:03:57.287379       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [78f5efcea671f680d59175d4a69693bbbeed9fa6a7cee912ee40e0f169e81738] <==
	I1109 14:03:38.933755       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:03:39.743954       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1109 14:03:39.744053       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:03:39.745947       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1109 14:03:39.746091       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 14:03:39.746103       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 14:03:39.746115       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:04:10.143520       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [d4b5eae8c40aaa51b1839a8972d830ffbb9a271e980e83d7f4e1e1a5a0e7c344] <==
	I1109 14:04:15.647826       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:04:15.648760       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:04:15.648829       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:04:15.650811       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:04:15.679894       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:04:15.695896       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:04:15.916336       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:15.916728       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	E1109 14:04:16.184059       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1109 14:04:16.664643       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:16.665695       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	I1109 14:04:56.714750       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:56.714878       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	I1109 14:04:56.849774       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tmhdr\": the object has been modified; please apply your changes to the latest version and try again"
	I1109 14:04:56.849836       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"8beb4313-ddc0-4b92-876a-23da421be39d", APIVersion:"v1", ResourceVersion:"292", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tmhdr EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tmhdr": the object has been modified; please apply your changes to the latest version and try again
	E1109 14:04:56.882397       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 14:05:01.377737       1 daemon_controller.go:346] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"a423ea2b-b11a-451e-9dc0-0b9bc17e2520\", ResourceVersion:\"2273\", Generation:1, CreationTimestamp:time.Date(2025, time.November, 9, 13, 50, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\
\",\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20250512-df8de77b\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\
\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40017852e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:
\"\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea5d0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolum
eClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea618), EmptyDir:(*v1.EmptyDirVolumeSource
)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.Portwor
xVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000aea678), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), A
zureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20250512-df8de77b\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0x400208fe00)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVar
Source)(0x400208fe30)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), RestartPolicyRules:[]v1.ContainerRestartRule(nil), VolumeMounts:[]v1.Volume
Mount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0x40024818c0), Stdin:false, StdinOnce:false,
TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0x4002225268), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400180ef30), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(n
il), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil), Resources:(*v1.ResourceRequirements)(nil), HostnameOverride:(*string)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400354e850)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40022252d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:4, NumberMisscheduled:0, DesiredNumberScheduled:4, NumberReady:4, ObservedGeneration:1, UpdatedNumberScheduled:4, NumberAvailable:4, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="Unhandle
dError"
	E1109 14:06:00.976377       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-rs764 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-rs764\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1109 14:06:01.012377       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-rs764 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-rs764\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1109 14:06:01.694293       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423884-m04"
	I1109 14:06:01.694518       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-423884-m05\" does not exist"
	I1109 14:06:01.758023       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-423884-m05" podCIDRs=["10.244.4.0/24"]
	I1109 14:06:05.578297       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-423884-m05"
	I1109 14:06:46.839449       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-423884-m04"
	
	
	==> kube-proxy [6db8ccf0f7e5d6927f1f90014c3a7aaa5232618397851b52007fa71137db2843] <==
	I1109 14:04:16.669492       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:04:17.085521       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:04:17.200105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:04:17.200215       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1109 14:04:17.200363       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:04:17.278348       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:04:17.278470       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:04:17.286098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:04:17.286454       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:04:17.286654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:04:17.290007       1 config.go:200] "Starting service config controller"
	I1109 14:04:17.290117       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:04:17.290166       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:04:17.290209       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:04:17.290245       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:04:17.290290       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:04:17.297376       1 config.go:309] "Starting node config controller"
	I1109 14:04:17.297723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:04:17.297759       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:04:17.390352       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:04:17.390429       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:04:17.390722       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [374a5429d6a564b1f172e68e0f603aefc3b04e7b183e31ef8b55c3ae430182ff] <==
	I1109 14:06:02.022097       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vdndm" node="ha-423884-m05"
	E1109 14:06:02.025101       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-kvnr4\": pod kube-proxy-kvnr4 is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-kvnr4"
	I1109 14:06:02.051418       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-kvnr4" node="ha-423884-m05"
	E1109 14:06:02.035062       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-v7zkg\": pod kube-proxy-v7zkg is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-v7zkg"
	I1109 14:06:02.052496       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-v7zkg" node="ha-423884-m05"
	E1109 14:06:02.522861       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-m79kh\": pod kube-proxy-m79kh is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-m79kh" node="ha-423884-m05"
	E1109 14:06:02.523002       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod b8ec98e6-7a7b-4875-ba3d-54d76bcc48d1(kube-system/kube-proxy-m79kh) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-m79kh"
	E1109 14:06:02.523063       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-m79kh\": pod kube-proxy-m79kh is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-m79kh"
	I1109 14:06:02.534280       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-m79kh" node="ha-423884-m05"
	E1109 14:06:02.610261       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-hxbhl\": pod kindnet-hxbhl is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-hxbhl" node="ha-423884-m05"
	E1109 14:06:02.610394       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 17de7135-f9e3-491d-bc8a-184957016c66(kube-system/kindnet-hxbhl) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-hxbhl"
	E1109 14:06:02.610452       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-hxbhl\": pod kindnet-hxbhl is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kindnet-hxbhl"
	I1109 14:06:02.619516       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-hxbhl" node="ha-423884-m05"
	E1109 14:06:05.401006       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wd67q\": pod kindnet-wd67q is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-wd67q" node="ha-423884-m05"
	E1109 14:06:05.401130       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 4e5b1687-6beb-4f69-ae4d-b512d9dde310(kube-system/kindnet-wd67q) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-wd67q"
	E1109 14:06:05.401497       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wd67q\": pod kindnet-wd67q is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kindnet-wd67q"
	E1109 14:06:05.402258       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-th4qv\": pod kindnet-th4qv is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-th4qv" node="ha-423884-m05"
	E1109 14:06:05.402376       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod d3d358d3-d4dd-4c89-bf70-2c8d12502968(kube-system/kindnet-th4qv) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-th4qv"
	E1109 14:06:05.402614       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-th4qv\": pod kindnet-th4qv is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kindnet-th4qv"
	I1109 14:06:05.402696       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wd67q" node="ha-423884-m05"
	I1109 14:06:05.403579       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-th4qv" node="ha-423884-m05"
	E1109 14:06:05.485252       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-8b7z6\": pod kindnet-8b7z6 is already assigned to node \"ha-423884-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-8b7z6" node="ha-423884-m05"
	E1109 14:06:05.485472       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 89c2d73f-27bd-4a17-886a-8d6734fd89d0(kube-system/kindnet-8b7z6) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-8b7z6"
	E1109 14:06:05.486709       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-8b7z6\": pod kindnet-8b7z6 is already assigned to node \"ha-423884-m05\"" logger="UnhandledError" pod="kube-system/kindnet-8b7z6"
	I1109 14:06:05.489197       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-8b7z6" node="ha-423884-m05"
	
	
	==> kubelet <==
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.263506     749 kubelet.go:3202] "Trying to delete pod" pod="kube-system/kube-vip-ha-423884" podUID="8470dcc0-6c4f-4241-ad4e-8b896f6712b0"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.282901     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-423884\" already exists" pod="kube-system/etcd-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.282937     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.324502     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-423884\" already exists" pod="kube-system/kube-apiserver-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.324540     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.353962     749 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:04:14 ha-423884 kubelet[749]: E1109 14:04:14.370339     749 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-423884\" already exists" pod="kube-system/kube-controller-manager-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.385896     749 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.385930     749 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-423884"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403495     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c249a88-1e05-40e0-b9d2-60a993f8c146-tmp\") pod \"storage-provisioner\" (UID: \"5c249a88-1e05-40e0-b9d2-60a993f8c146\") " pod="kube-system/storage-provisioner"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403551     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3de4d87-91fe-4303-a8db-50a70cbce4d7-lib-modules\") pod \"kube-proxy-7z7d2\" (UID: \"f3de4d87-91fe-4303-a8db-50a70cbce4d7\") " pod="kube-system/kube-proxy-7z7d2"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403593     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-lib-modules\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403613     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-xtables-lock\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403647     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/aaab0693-39a9-46cc-b5c6-f07055a7cbc4-cni-cfg\") pod \"kindnet-4s4nj\" (UID: \"aaab0693-39a9-46cc-b5c6-f07055a7cbc4\") " pod="kube-system/kindnet-4s4nj"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.403685     749 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3de4d87-91fe-4303-a8db-50a70cbce4d7-xtables-lock\") pod \"kube-proxy-7z7d2\" (UID: \"f3de4d87-91fe-4303-a8db-50a70cbce4d7\") " pod="kube-system/kube-proxy-7z7d2"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.469444     749 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:04:14 ha-423884 kubelet[749]: I1109 14:04:14.588284     749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-423884" podStartSLOduration=0.588263843 podStartE2EDuration="588.263843ms" podCreationTimestamp="2025-11-09 14:04:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:04:14.53432425 +0000 UTC m=+39.410575888" watchObservedRunningTime="2025-11-09 14:04:14.588263843 +0000 UTC m=+39.464515481"
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.716436     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a WatchSource:0}: Error finding container ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a: Status 404 returned error can't find the container with id ef99cabeed9545cc36e7e4c46554ffacb1775a080c9128707cd0b34d4cb4a81a
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.783698     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb WatchSource:0}: Error finding container 624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb: Status 404 returned error can't find the container with id 624febe3bef0cff2a8b38f86f67900ed2fa943529a6d461c2caa559b51a854bb
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.798946     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87 WatchSource:0}: Error finding container 49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87: Status 404 returned error can't find the container with id 49d4f70bf4320e7c70fbccc94bd31ea39dc7456ce3f9c2a9a4d16059f54b6f87
	Nov 09 14:04:14 ha-423884 kubelet[749]: W1109 14:04:14.971628     749 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio-156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13 WatchSource:0}: Error finding container 156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13: Status 404 returned error can't find the container with id 156c341c8adeeef3c52b9cc70a6ad9c7dc97d08df535a6d0183d2288e46aaa13
	Nov 09 14:04:15 ha-423884 kubelet[749]: I1109 14:04:15.348436     749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb3ff8bceed3e182ae34f06d816435e" path="/var/lib/kubelet/pods/fbb3ff8bceed3e182ae34f06d816435e/volumes"
	Nov 09 14:04:35 ha-423884 kubelet[749]: E1109 14:04:35.276791     749 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd\": container with ID starting with 12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd not found: ID does not exist" containerID="12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd"
	Nov 09 14:04:35 ha-423884 kubelet[749]: I1109 14:04:35.276883     749 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd" err="rpc error: code = NotFound desc = could not find container \"12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd\": container with ID starting with 12f4955a06244881af7082cc0ff38ad3ea7f1e7f5cb44d59792e4bf672da56bd not found: ID does not exist"
	Nov 09 14:04:46 ha-423884 kubelet[749]: I1109 14:04:46.630690     749 scope.go:117] "RemoveContainer" containerID="5bed382b465f29e125aa4acb35f3e43d30cb2fa5b8aadd1ad04f56abc10722a7"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-423884 -n ha-423884
helpers_test.go:269: (dbg) Run:  kubectl --context ha-423884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.60s)
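Note: the kube-scheduler log above is dominated by "already assigned to node" bind conflicts for the kube-proxy and kindnet DaemonSet pods while ha-423884-m05 joins; the scheduler then aborts the re-queue once it sees the pod is assigned. A hedged follow-up check (context and node names are taken from the log above; the commands themselves are ordinary kubectl and are not part of the test):
	# List the kube-system pods that actually landed on the newly added node;
	# after the lost binding races settle there should be one kube-proxy and one kindnet pod.
	kubectl --context ha-423884 get pods -n kube-system -o wide --field-selector spec.nodeName=ha-423884-m05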

                                                
                                    
TestJSONOutput/pause/Command (2.15s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-510235 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-510235 --output=json --user=testUser: exit status 80 (2.145105635s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ee9e956e-db07-4722-9761-62b900bb8737","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-510235 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"24f23492-362e-49c6-a319-75fed6589669","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-09T14:08:34Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"0b0b8a3d-923e-48cd-826f-8a61a332c1c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-510235 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.15s)
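Note: the GUEST_PAUSE error above comes from minikube running `sudo runc list -f json` inside the node, which fails because /run/runc (runc's default state directory) is missing on this crio node. A minimal reproduction sketch, assuming the profile name from the log; whether the state actually lives under a different root (crun, or a custom crio runtime_root) is an assumption to verify, not something the log confirms:
	# Re-run the exact command the pause path uses, inside the node:
	out/minikube-linux-arm64 -p json-output-510235 ssh -- sudo runc list -f json
	# Check whether the runtime state directory exists somewhere else:
	out/minikube-linux-arm64 -p json-output-510235 ssh -- sudo ls /run/runc /run/crun
	# runc accepts an explicit --root if the state directory is non-default:
	out/minikube-linux-arm64 -p json-output-510235 ssh -- sudo runc --root /run/runc list -f json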

                                                
                                    
TestJSONOutput/unpause/Command (1.74s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-510235 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-510235 --output=json --user=testUser: exit status 80 (1.736638531s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"14cf1260-ed69-4a72-b020-3df4c3b0beea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-510235 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a099202f-cec6-4cb1-8a04-77bd6a20bbb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-09T14:08:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"267723b7-e75f-47aa-ba7f-4a7d31aa0329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-510235 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.74s)

                                                
                                    
TestPause/serial/Pause (8.06s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-342238 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-342238 --alsologtostderr -v=5: exit status 80 (2.499765731s)

                                                
                                                
-- stdout --
	* Pausing node pause-342238 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:32:46.654005  168989 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:32:46.659795  168989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:32:46.659830  168989 out.go:374] Setting ErrFile to fd 2...
	I1109 14:32:46.659842  168989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:32:46.660232  168989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:32:46.660551  168989 out.go:368] Setting JSON to false
	I1109 14:32:46.660586  168989 mustload.go:66] Loading cluster: pause-342238
	I1109 14:32:46.661099  168989 config.go:182] Loaded profile config "pause-342238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:32:46.661644  168989 cli_runner.go:164] Run: docker container inspect pause-342238 --format={{.State.Status}}
	I1109 14:32:46.680682  168989 host.go:66] Checking if "pause-342238" exists ...
	I1109 14:32:46.681034  168989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:32:46.745287  168989 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:58 SystemTime:2025-11-09 14:32:46.735145921 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:32:46.745967  168989 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-342238 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:32:46.750548  168989 out.go:179] * Pausing node pause-342238 ... 
	I1109 14:32:46.755552  168989 host.go:66] Checking if "pause-342238" exists ...
	I1109 14:32:46.756240  168989 ssh_runner.go:195] Run: systemctl --version
	I1109 14:32:46.756304  168989 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342238
	I1109 14:32:46.780455  168989 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33020 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/pause-342238/id_rsa Username:docker}
	I1109 14:32:46.896931  168989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:32:46.929964  168989 pause.go:52] kubelet running: true
	I1109 14:32:46.930059  168989 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:32:47.212090  168989 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:32:47.212175  168989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:32:47.350062  168989 cri.go:89] found id: "08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425"
	I1109 14:32:47.350083  168989 cri.go:89] found id: "aad8dd5398369e8f3aebeb36d9dfabb64135dc113966d452fdeeed0063b3e1e2"
	I1109 14:32:47.350088  168989 cri.go:89] found id: "a5688c3ff2046c131f5e093ac5f29648ebf7838084be2c2c4f88f5ec839473a5"
	I1109 14:32:47.350092  168989 cri.go:89] found id: "bf258d25a5e083c243df5f441d370436199818e034bda7733cdefc1cedba4399"
	I1109 14:32:47.350095  168989 cri.go:89] found id: "48a5ab06b1f371ac5dbcd26591c6cff2b3c7a3f9d82ec3d36aacffc729c253e0"
	I1109 14:32:47.350099  168989 cri.go:89] found id: "afac707896ac08acd09ba3881e8702a21359b6a9e5316bfc87cd94a6597c947f"
	I1109 14:32:47.350103  168989 cri.go:89] found id: "26b98fc2b6e91b01fe188e5f329cb231784dc1c5dd7afa048ee956eeaea49020"
	I1109 14:32:47.350106  168989 cri.go:89] found id: "e00cc266825be7d8acef576725cca22f3def607f1f54044f19ec2250c9c87463"
	I1109 14:32:47.350109  168989 cri.go:89] found id: "c9b12325c06ba9e4ab5abe52dc50d540a4c27cfd177801eff74770b81d946220"
	I1109 14:32:47.350115  168989 cri.go:89] found id: "350dceaf4e4da722256622b6806f53ab082a3778455c73c0ca943ef840d44bfe"
	I1109 14:32:47.350118  168989 cri.go:89] found id: "5c6f6fd508ff4f00007a53c6092586ec8651aa027004a7f6f7018d9f33274a1d"
	I1109 14:32:47.350121  168989 cri.go:89] found id: "bd2b939fdc032d3abef04f2b1ec27467d56019dfce8f70d173b907e0b789d362"
	I1109 14:32:47.350124  168989 cri.go:89] found id: "5d123fdfeb405288face83e3bb92b58e7775f382757b3119f80305045fbcea28"
	I1109 14:32:47.350127  168989 cri.go:89] found id: "4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	I1109 14:32:47.350130  168989 cri.go:89] found id: ""
	I1109 14:32:47.350177  168989 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:32:47.387096  168989 retry.go:31] will retry after 298.458419ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:32:47Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:32:47.686637  168989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:32:47.728633  168989 pause.go:52] kubelet running: false
	I1109 14:32:47.728692  168989 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:32:48.073012  168989 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:32:48.073087  168989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:32:48.193365  168989 cri.go:89] found id: "08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425"
	I1109 14:32:48.193384  168989 cri.go:89] found id: "aad8dd5398369e8f3aebeb36d9dfabb64135dc113966d452fdeeed0063b3e1e2"
	I1109 14:32:48.193392  168989 cri.go:89] found id: "a5688c3ff2046c131f5e093ac5f29648ebf7838084be2c2c4f88f5ec839473a5"
	I1109 14:32:48.193396  168989 cri.go:89] found id: "bf258d25a5e083c243df5f441d370436199818e034bda7733cdefc1cedba4399"
	I1109 14:32:48.193399  168989 cri.go:89] found id: "48a5ab06b1f371ac5dbcd26591c6cff2b3c7a3f9d82ec3d36aacffc729c253e0"
	I1109 14:32:48.193403  168989 cri.go:89] found id: "afac707896ac08acd09ba3881e8702a21359b6a9e5316bfc87cd94a6597c947f"
	I1109 14:32:48.193406  168989 cri.go:89] found id: "26b98fc2b6e91b01fe188e5f329cb231784dc1c5dd7afa048ee956eeaea49020"
	I1109 14:32:48.193409  168989 cri.go:89] found id: "e00cc266825be7d8acef576725cca22f3def607f1f54044f19ec2250c9c87463"
	I1109 14:32:48.193412  168989 cri.go:89] found id: "c9b12325c06ba9e4ab5abe52dc50d540a4c27cfd177801eff74770b81d946220"
	I1109 14:32:48.193418  168989 cri.go:89] found id: "350dceaf4e4da722256622b6806f53ab082a3778455c73c0ca943ef840d44bfe"
	I1109 14:32:48.193421  168989 cri.go:89] found id: "5c6f6fd508ff4f00007a53c6092586ec8651aa027004a7f6f7018d9f33274a1d"
	I1109 14:32:48.193424  168989 cri.go:89] found id: "bd2b939fdc032d3abef04f2b1ec27467d56019dfce8f70d173b907e0b789d362"
	I1109 14:32:48.193431  168989 cri.go:89] found id: "5d123fdfeb405288face83e3bb92b58e7775f382757b3119f80305045fbcea28"
	I1109 14:32:48.193434  168989 cri.go:89] found id: "4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	I1109 14:32:48.193437  168989 cri.go:89] found id: ""
	I1109 14:32:48.193488  168989 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:32:48.230942  168989 retry.go:31] will retry after 394.250991ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:32:48Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:32:48.625399  168989 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:32:48.656136  168989 pause.go:52] kubelet running: false
	I1109 14:32:48.656199  168989 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:32:48.956310  168989 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:32:48.956390  168989 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:32:49.064508  168989 cri.go:89] found id: "08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425"
	I1109 14:32:49.064528  168989 cri.go:89] found id: "aad8dd5398369e8f3aebeb36d9dfabb64135dc113966d452fdeeed0063b3e1e2"
	I1109 14:32:49.064533  168989 cri.go:89] found id: "a5688c3ff2046c131f5e093ac5f29648ebf7838084be2c2c4f88f5ec839473a5"
	I1109 14:32:49.064537  168989 cri.go:89] found id: "bf258d25a5e083c243df5f441d370436199818e034bda7733cdefc1cedba4399"
	I1109 14:32:49.064540  168989 cri.go:89] found id: "48a5ab06b1f371ac5dbcd26591c6cff2b3c7a3f9d82ec3d36aacffc729c253e0"
	I1109 14:32:49.064544  168989 cri.go:89] found id: "afac707896ac08acd09ba3881e8702a21359b6a9e5316bfc87cd94a6597c947f"
	I1109 14:32:49.064548  168989 cri.go:89] found id: "26b98fc2b6e91b01fe188e5f329cb231784dc1c5dd7afa048ee956eeaea49020"
	I1109 14:32:49.064552  168989 cri.go:89] found id: "e00cc266825be7d8acef576725cca22f3def607f1f54044f19ec2250c9c87463"
	I1109 14:32:49.064555  168989 cri.go:89] found id: "c9b12325c06ba9e4ab5abe52dc50d540a4c27cfd177801eff74770b81d946220"
	I1109 14:32:49.064561  168989 cri.go:89] found id: "350dceaf4e4da722256622b6806f53ab082a3778455c73c0ca943ef840d44bfe"
	I1109 14:32:49.064573  168989 cri.go:89] found id: "5c6f6fd508ff4f00007a53c6092586ec8651aa027004a7f6f7018d9f33274a1d"
	I1109 14:32:49.064576  168989 cri.go:89] found id: "bd2b939fdc032d3abef04f2b1ec27467d56019dfce8f70d173b907e0b789d362"
	I1109 14:32:49.064580  168989 cri.go:89] found id: "5d123fdfeb405288face83e3bb92b58e7775f382757b3119f80305045fbcea28"
	I1109 14:32:49.064585  168989 cri.go:89] found id: "4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	I1109 14:32:49.064594  168989 cri.go:89] found id: ""
	I1109 14:32:49.064650  168989 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:32:49.085187  168989 out.go:203] 
	W1109 14:32:49.087766  168989 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:32:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:32:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:32:49.087791  168989 out.go:285] * 
	* 
	W1109 14:32:49.092828  168989 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:32:49.096424  168989 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-342238 --alsologtostderr -v=5" : exit status 80
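Note: in the stderr above, the pause path still enumerates the kube-system containers through crictl, but each subsequent `sudo runc list -f json` fails on the missing /run/runc and the command exits with GUEST_PAUSE after its retries. A hedged follow-up on the node (profile name from the log; /etc/crio is the usual cri-o config location and is an assumption here):
	# Containers remain visible through the CRI even though runc's state dir is absent:
	out/minikube-linux-arm64 -p pause-342238 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# See which OCI runtime and runtime_root cri-o is configured with:
	out/minikube-linux-arm64 -p pause-342238 ssh -- sudo grep -R -n 'runtime_root\|default_runtime' /etc/crio/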
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-342238
helpers_test.go:243: (dbg) docker inspect pause-342238:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b",
	        "Created": "2025-11-09T14:30:46.184710211Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 162164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:30:46.249829699Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b/hosts",
	        "LogPath": "/var/lib/docker/containers/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b-json.log",
	        "Name": "/pause-342238",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-342238:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-342238",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b",
	                "LowerDir": "/var/lib/docker/overlay2/926a1884ad44e5f009bd7313f33190f07a044fa40f529142374227c966f40740-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/926a1884ad44e5f009bd7313f33190f07a044fa40f529142374227c966f40740/merged",
	                "UpperDir": "/var/lib/docker/overlay2/926a1884ad44e5f009bd7313f33190f07a044fa40f529142374227c966f40740/diff",
	                "WorkDir": "/var/lib/docker/overlay2/926a1884ad44e5f009bd7313f33190f07a044fa40f529142374227c966f40740/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-342238",
	                "Source": "/var/lib/docker/volumes/pause-342238/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-342238",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-342238",
	                "name.minikube.sigs.k8s.io": "pause-342238",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3609b2a319d9663b6d20470955d326e7d4e0e1752835990a6eff6cbca7594e6",
	            "SandboxKey": "/var/run/docker/netns/e3609b2a319d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-342238": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:be:6b:93:f7:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c6b405806b0b7491609b963ef6e96a48d9426c1f4e7f03b46455af0345964a0",
	                    "EndpointID": "c10a72cc3cd48c85e6040b7d4b586f056abb5d31524a4e545ea3db1fc69b9014",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-342238",
	                        "8616b3d77f84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
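Note: the inspect output above also records the published host ports for the KIC container (22/tcp on 127.0.0.1:33020, 8443/tcp on 127.0.0.1:33023). The same Go template minikube ran earlier in this log (cli_runner at 14:32:46) pulls a single port out directly; quoting may need adjusting for an interactive shell:
	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-342238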
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-342238 -n pause-342238
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-342238 -n pause-342238: exit status 2 (389.333731ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-342238 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-342238 logs -n 25: (1.691435772s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p missing-upgrade-396103                                                                                                                │ missing-upgrade-396103    │ jenkins │ v1.37.0 │ 09 Nov 25 14:26 UTC │ 09 Nov 25 14:26 UTC │
	│ start   │ -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:26 UTC │ 09 Nov 25 14:27 UTC │
	│ stop    │ -p kubernetes-upgrade-334644                                                                                                             │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │ 09 Nov 25 14:27 UTC │
	│ start   │ -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │ 09 Nov 25 14:32 UTC │
	│ delete  │ -p NoKubernetes-451939                                                                                                                   │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │ 09 Nov 25 14:27 UTC │
	│ start   │ -p NoKubernetes-451939 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │ 09 Nov 25 14:27 UTC │
	│ ssh     │ -p NoKubernetes-451939 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │                     │
	│ stop    │ -p NoKubernetes-451939                                                                                                                   │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:28 UTC │ 09 Nov 25 14:28 UTC │
	│ start   │ -p NoKubernetes-451939 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:28 UTC │ 09 Nov 25 14:28 UTC │
	│ ssh     │ -p NoKubernetes-451939 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:28 UTC │                     │
	│ delete  │ -p NoKubernetes-451939                                                                                                                   │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:28 UTC │ 09 Nov 25 14:28 UTC │
	│ start   │ -p stopped-upgrade-471685 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-471685    │ jenkins │ v1.32.0 │ 09 Nov 25 14:28 UTC │ 09 Nov 25 14:29 UTC │
	│ stop    │ stopped-upgrade-471685 stop                                                                                                              │ stopped-upgrade-471685    │ jenkins │ v1.32.0 │ 09 Nov 25 14:29 UTC │ 09 Nov 25 14:29 UTC │
	│ start   │ -p stopped-upgrade-471685 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-471685    │ jenkins │ v1.37.0 │ 09 Nov 25 14:29 UTC │ 09 Nov 25 14:29 UTC │
	│ delete  │ -p stopped-upgrade-471685                                                                                                                │ stopped-upgrade-471685    │ jenkins │ v1.37.0 │ 09 Nov 25 14:29 UTC │ 09 Nov 25 14:29 UTC │
	│ start   │ -p running-upgrade-382260 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-382260    │ jenkins │ v1.32.0 │ 09 Nov 25 14:29 UTC │ 09 Nov 25 14:30 UTC │
	│ start   │ -p running-upgrade-382260 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-382260    │ jenkins │ v1.37.0 │ 09 Nov 25 14:30 UTC │ 09 Nov 25 14:30 UTC │
	│ delete  │ -p running-upgrade-382260                                                                                                                │ running-upgrade-382260    │ jenkins │ v1.37.0 │ 09 Nov 25 14:30 UTC │ 09 Nov 25 14:30 UTC │
	│ start   │ -p pause-342238 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-342238              │ jenkins │ v1.37.0 │ 09 Nov 25 14:30 UTC │ 09 Nov 25 14:32 UTC │
	│ start   │ -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │                     │
	│ start   │ -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │ 09 Nov 25 14:32 UTC │
	│ start   │ -p pause-342238 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-342238              │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │ 09 Nov 25 14:32 UTC │
	│ delete  │ -p kubernetes-upgrade-334644                                                                                                             │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │ 09 Nov 25 14:32 UTC │
	│ start   │ -p force-systemd-flag-519664 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-519664 │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │                     │
	│ pause   │ -p pause-342238 --alsologtostderr -v=5                                                                                                   │ pause-342238              │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:32:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:32:41.308187  168662 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:32:41.308516  168662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:32:41.308548  168662 out.go:374] Setting ErrFile to fd 2...
	I1109 14:32:41.308577  168662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:32:41.308937  168662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:32:41.309576  168662 out.go:368] Setting JSON to false
	I1109 14:32:41.310918  168662 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4512,"bootTime":1762694250,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:32:41.311062  168662 start.go:143] virtualization:  
	I1109 14:32:41.317140  168662 out.go:179] * [force-systemd-flag-519664] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:32:41.320599  168662 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:32:41.320675  168662 notify.go:221] Checking for updates...
	I1109 14:32:41.327757  168662 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:32:41.330739  168662 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:32:41.333899  168662 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:32:41.337043  168662 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:32:41.340202  168662 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:32:41.343686  168662 config.go:182] Loaded profile config "pause-342238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:32:41.343782  168662 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:32:41.373447  168662 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:32:41.373569  168662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:32:41.445638  168662 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:32:41.435311659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:32:41.445758  168662 docker.go:319] overlay module found
	I1109 14:32:41.449106  168662 out.go:179] * Using the docker driver based on user configuration
	I1109 14:32:41.452037  168662 start.go:309] selected driver: docker
	I1109 14:32:41.452055  168662 start.go:930] validating driver "docker" against <nil>
	I1109 14:32:41.452069  168662 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:32:41.453476  168662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:32:41.516678  168662 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:32:41.506033835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:32:41.516879  168662 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:32:41.517111  168662 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 14:32:41.520125  168662 out.go:179] * Using Docker driver with root privileges
	I1109 14:32:41.523049  168662 cni.go:84] Creating CNI manager for ""
	I1109 14:32:41.523116  168662 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:32:41.523129  168662 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:32:41.523211  168662 start.go:353] cluster config:
	{Name:force-systemd-flag-519664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-519664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:32:41.526420  168662 out.go:179] * Starting "force-systemd-flag-519664" primary control-plane node in "force-systemd-flag-519664" cluster
	I1109 14:32:41.530307  168662 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:32:41.533297  168662 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:32:41.536036  168662 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:32:41.536091  168662 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:32:41.536130  168662 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:32:41.536139  168662 cache.go:65] Caching tarball of preloaded images
	I1109 14:32:41.536227  168662 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:32:41.536237  168662 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:32:41.536340  168662 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/force-systemd-flag-519664/config.json ...
	I1109 14:32:41.536357  168662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/force-systemd-flag-519664/config.json: {Name:mk28db7b284d5d4952368e85bfa9f43c92c325a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:32:41.555667  168662 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:32:41.555692  168662 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:32:41.555711  168662 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:32:41.555735  168662 start.go:360] acquireMachinesLock for force-systemd-flag-519664: {Name:mk9577fc13b01146c0de79a0ba1703985e6f141e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:32:41.555852  168662 start.go:364] duration metric: took 98.257µs to acquireMachinesLock for "force-systemd-flag-519664"
	I1109 14:32:41.555915  168662 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-519664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-519664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:32:41.555995  168662 start.go:125] createHost starting for "" (driver="docker")
	W1109 14:32:42.531315  166531 pod_ready.go:104] pod "coredns-66bc5c9577-4vkj9" is not "Ready", error: <nil>
	W1109 14:32:44.550049  166531 pod_ready.go:104] pod "coredns-66bc5c9577-4vkj9" is not "Ready", error: <nil>
	I1109 14:32:45.063834  166531 pod_ready.go:94] pod "coredns-66bc5c9577-4vkj9" is "Ready"
	I1109 14:32:45.063881  166531 pod_ready.go:86] duration metric: took 11.038253986s for pod "coredns-66bc5c9577-4vkj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.120759  166531 pod_ready.go:83] waiting for pod "etcd-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.132122  166531 pod_ready.go:94] pod "etcd-pause-342238" is "Ready"
	I1109 14:32:45.132155  166531 pod_ready.go:86] duration metric: took 11.365253ms for pod "etcd-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.136891  166531 pod_ready.go:83] waiting for pod "kube-apiserver-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.144668  166531 pod_ready.go:94] pod "kube-apiserver-pause-342238" is "Ready"
	I1109 14:32:45.144763  166531 pod_ready.go:86] duration metric: took 7.845932ms for pod "kube-apiserver-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.147797  166531 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.236138  166531 pod_ready.go:94] pod "kube-controller-manager-pause-342238" is "Ready"
	I1109 14:32:45.236179  166531 pod_ready.go:86] duration metric: took 88.255389ms for pod "kube-controller-manager-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.433442  166531 pod_ready.go:83] waiting for pod "kube-proxy-r56tq" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:41.559297  168662 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:32:41.559536  168662 start.go:159] libmachine.API.Create for "force-systemd-flag-519664" (driver="docker")
	I1109 14:32:41.559582  168662 client.go:173] LocalClient.Create starting
	I1109 14:32:41.559658  168662 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 14:32:41.559695  168662 main.go:143] libmachine: Decoding PEM data...
	I1109 14:32:41.559719  168662 main.go:143] libmachine: Parsing certificate...
	I1109 14:32:41.559779  168662 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 14:32:41.559802  168662 main.go:143] libmachine: Decoding PEM data...
	I1109 14:32:41.559812  168662 main.go:143] libmachine: Parsing certificate...
	I1109 14:32:41.560235  168662 cli_runner.go:164] Run: docker network inspect force-systemd-flag-519664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:32:41.576242  168662 cli_runner.go:211] docker network inspect force-systemd-flag-519664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:32:41.576355  168662 network_create.go:284] running [docker network inspect force-systemd-flag-519664] to gather additional debugging logs...
	I1109 14:32:41.576376  168662 cli_runner.go:164] Run: docker network inspect force-systemd-flag-519664
	W1109 14:32:41.597754  168662 cli_runner.go:211] docker network inspect force-systemd-flag-519664 returned with exit code 1
	I1109 14:32:41.597784  168662 network_create.go:287] error running [docker network inspect force-systemd-flag-519664]: docker network inspect force-systemd-flag-519664: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-519664 not found
	I1109 14:32:41.597797  168662 network_create.go:289] output of [docker network inspect force-systemd-flag-519664]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-519664 not found
	
	** /stderr **
	I1109 14:32:41.597979  168662 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:32:41.615335  168662 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b901b8dcb821 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:01:f6:7f:4e:91} reservation:<nil>}
	I1109 14:32:41.615631  168662 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-46dda1eda2df IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:a9:4d:4f:8f:31} reservation:<nil>}
	I1109 14:32:41.615955  168662 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3b44df0b0b1c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:80:ac:56:fe:3d} reservation:<nil>}
	I1109 14:32:41.616347  168662 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a2fa0}
	I1109 14:32:41.616372  168662 network_create.go:124] attempt to create docker network force-systemd-flag-519664 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 14:32:41.616433  168662 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-519664 force-systemd-flag-519664
	I1109 14:32:41.676322  168662 network_create.go:108] docker network force-systemd-flag-519664 192.168.76.0/24 created
	I1109 14:32:41.676355  168662 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-519664" container
	I1109 14:32:41.676446  168662 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:32:41.692305  168662 cli_runner.go:164] Run: docker volume create force-systemd-flag-519664 --label name.minikube.sigs.k8s.io=force-systemd-flag-519664 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:32:41.710931  168662 oci.go:103] Successfully created a docker volume force-systemd-flag-519664
	I1109 14:32:41.711034  168662 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-519664-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-519664 --entrypoint /usr/bin/test -v force-systemd-flag-519664:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:32:42.268059  168662 oci.go:107] Successfully prepared a docker volume force-systemd-flag-519664
	I1109 14:32:42.268141  168662 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:32:42.268156  168662 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:32:42.268241  168662 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-519664:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:32:45.832946  166531 pod_ready.go:94] pod "kube-proxy-r56tq" is "Ready"
	I1109 14:32:45.832987  166531 pod_ready.go:86] duration metric: took 399.519235ms for pod "kube-proxy-r56tq" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:46.033394  166531 pod_ready.go:83] waiting for pod "kube-scheduler-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:46.433223  166531 pod_ready.go:94] pod "kube-scheduler-pause-342238" is "Ready"
	I1109 14:32:46.433247  166531 pod_ready.go:86] duration metric: took 399.82747ms for pod "kube-scheduler-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:46.433259  166531 pod_ready.go:40] duration metric: took 12.413813745s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:32:46.498528  166531 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:32:46.528332  166531 out.go:179] * Done! kubectl is now configured to use "pause-342238" cluster and "default" namespace by default
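
	The pod_ready.go lines above show the shape of minikube's readiness wait: poll the kube-system pods matching each of the listed labels until every one reports Ready or the wait times out. A minimal client-go sketch of that polling pattern follows; the kubeconfig path, 2-second poll interval and 2-minute deadline are illustrative assumptions, not minikube's actual values.

```go
// Illustrative only: polls kube-system pods carrying the control-plane labels
// seen in the pod_ready.go lines above until each is Ready or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	labels := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(2 * time.Minute)
	for _, sel := range labels {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(pods.Items[0]) {
				fmt.Printf("pod %q is Ready\n", pods.Items[0].Name)
				break
			}
			if time.Now().After(deadline) {
				fmt.Printf("timed out waiting for %s\n", sel)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
```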
	
	
	==> CRI-O <==
	Nov 09 14:32:26 pause-342238 crio[2193]: time="2025-11-09T14:32:26.546294959Z" level=info msg="Started container" PID=2509 containerID=a5688c3ff2046c131f5e093ac5f29648ebf7838084be2c2c4f88f5ec839473a5 description=kube-system/kube-proxy-r56tq/kube-proxy id=a66dab69-3513-41a2-ad8f-5b065fda47b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bce71343d0534c12c2c39b3dc11bd816ba438d5cb754d13ff71742834b621858
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.422024254Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.426049833Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.42608409Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.426112299Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.430810184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.430980818Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.43105487Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.436971638Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.437126051Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.43719348Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.444857528Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.444907333Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.444941737Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.452629013Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.452825748Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.463350115Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=68646f80-ed7e-48a3-a41c-43f4ad145b8a name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.4669731Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=d4c70845-6eab-46ce-8494-ad15b7ab9509 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.468999732Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-4vkj9/coredns" id=b587fc55-0061-4db5-ab1b-0095e66dfa41 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.469144217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.484213055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.485025275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.516501229Z" level=info msg="Created container 08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425: kube-system/coredns-66bc5c9577-4vkj9/coredns" id=b587fc55-0061-4db5-ab1b-0095e66dfa41 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.517489417Z" level=info msg="Starting container: 08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425" id=993f93ce-d4ec-4b2a-adb1-35106383dfd6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.520227805Z" level=info msg="Started container" PID=2753 containerID=08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425 description=kube-system/coredns-66bc5c9577-4vkj9/coredns id=993f93ce-d4ec-4b2a-adb1-35106383dfd6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a67d49738b53b28ae5b634dbbd4392da5294a814f429a45fce96913a142a8f21
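
	The CNI monitoring events above (CREATE and WRITE on 10-kindnet.conflist.temp followed by a RENAME to 10-kindnet.conflist) are the classic write-then-rename update, which lets CRI-O's watcher always see either the old config or a complete new one. A minimal sketch of that pattern, with a placeholder conflist payload rather than kindnet's real one:

```go
// Sketch of the atomic conflist update implied by the events above.
// Paths and the config payload are placeholders, not kindnet's actual content.
package main

import (
	"os"
	"path/filepath"
)

func writeConflistAtomically(dir, name string, data []byte) error {
	// Temp file lives in the same directory so the final rename is atomic.
	tmp := filepath.Join(dir, name+".temp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	// Readers (here: CRI-O's CNI watcher) never observe a half-written config:
	// they see either the old file or the fully written new one.
	return os.Rename(tmp, filepath.Join(dir, name))
}

func main() {
	conf := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[{"type":"ptp"}]}`)
	if err := writeConflistAtomically("/etc/cni/net.d", "10-kindnet.conflist", conf); err != nil {
		panic(err)
	}
}
```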
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	08018cf90a3eb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   5 seconds ago       Running             coredns                   2                   a67d49738b53b       coredns-66bc5c9577-4vkj9               kube-system
	aad8dd5398369       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago      Running             kindnet-cni               2                   704a5f3cfeced       kindnet-dvtdj                          kube-system
	a5688c3ff2046       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago      Running             kube-proxy                2                   bce71343d0534       kube-proxy-r56tq                       kube-system
	bf258d25a5e08       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago      Running             etcd                      2                   8d67c5cf68e27       etcd-pause-342238                      kube-system
	48a5ab06b1f37       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago      Running             kube-scheduler            2                   f5e6f6d1c3687       kube-scheduler-pause-342238            kube-system
	afac707896ac0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago      Running             kube-controller-manager   2                   aa679bf46e066       kube-controller-manager-pause-342238   kube-system
	26b98fc2b6e91       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago      Running             kube-apiserver            2                   50699b691ecec       kube-apiserver-pause-342238            kube-system
	e00cc266825be       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   36 seconds ago      Exited              kube-scheduler            1                   f5e6f6d1c3687       kube-scheduler-pause-342238            kube-system
	c9b12325c06ba       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   36 seconds ago      Exited              kube-proxy                1                   bce71343d0534       kube-proxy-r56tq                       kube-system
	350dceaf4e4da       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   36 seconds ago      Exited              etcd                      1                   8d67c5cf68e27       etcd-pause-342238                      kube-system
	5c6f6fd508ff4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   36 seconds ago      Exited              kindnet-cni               1                   704a5f3cfeced       kindnet-dvtdj                          kube-system
	bd2b939fdc032       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   36 seconds ago      Exited              kube-controller-manager   1                   aa679bf46e066       kube-controller-manager-pause-342238   kube-system
	5d123fdfeb405       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   36 seconds ago      Exited              kube-apiserver            1                   50699b691ecec       kube-apiserver-pause-342238            kube-system
	4a1f936e07428       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago      Exited              coredns                   1                   a67d49738b53b       coredns-66bc5c9577-4vkj9               kube-system
	
	
	==> coredns [08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56056 - 58999 "HINFO IN 1044715838941189554.2066353908982510073. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013012685s
	
	
	==> coredns [4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54554 - 38985 "HINFO IN 2451062830002027841.7771890144460148705. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020729974s
	
	
	==> describe nodes <==
	Name:               pause-342238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-342238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=pause-342238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_31_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-342238
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:32:37 +0000   Sun, 09 Nov 2025 14:31:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:32:37 +0000   Sun, 09 Nov 2025 14:31:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:32:37 +0000   Sun, 09 Nov 2025 14:31:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:32:37 +0000   Sun, 09 Nov 2025 14:32:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-342238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                51f3084b-4cea-414c-a9fb-6d5bf7a7a557
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4vkj9                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     89s
	  kube-system                 etcd-pause-342238                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-dvtdj                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-pause-342238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-pause-342238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-r56tq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-342238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 88s   kube-proxy       
	  Normal   Starting                 15s   kube-proxy       
	  Normal   Starting                 95s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 95s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  95s   kubelet          Node pause-342238 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    95s   kubelet          Node pause-342238 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     95s   kubelet          Node pause-342238 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           90s   node-controller  Node pause-342238 event: Registered Node pause-342238 in Controller
	  Normal   NodeReady                48s   kubelet          Node pause-342238 status is now: NodeReady
	  Warning  ContainerGCFailed        35s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           15s   node-controller  Node pause-342238 event: Registered Node pause-342238 in Controller
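
	For reference, the Allocated resources block above is just the column sums of the pod table: CPU requests 100m + 100m + 100m + 250m + 200m + 0 + 100m = 850m, which is 850m / 2000m = 42.5% of the node's 2 CPUs (shown as 42%), and the only CPU limit is kindnet's 100m (5%). Memory requests 70Mi + 100Mi + 50Mi = 220Mi and limits 170Mi + 50Mi = 220Mi, each roughly 2% of the 8022300Ki allocatable.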
	
	
	==> dmesg <==
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:03] overlayfs: idmapped layers are currently not supported
	[  +3.581786] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:05] overlayfs: idmapped layers are currently not supported
	[ +45.728314] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:12] overlayfs: idmapped layers are currently not supported
	[ +35.606556] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [350dceaf4e4da722256622b6806f53ab082a3778455c73c0ca943ef840d44bfe] <==
	{"level":"warn","ts":"2025-11-09T14:32:14.819949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:14.822952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:14.908552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:14.966948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:15.070138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48466","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:32:15.080731Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-09T14:32:15.082757Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-342238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-09T14:32:15.082993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-11-09T14:32:15.083245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48476","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:48476: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:32:15.098698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48504","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:48504: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:32:15.098782Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T14:32:15.101140Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:32:15.101247Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-09T14:32:15.101365Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-09T14:32:15.102837Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-09T14:32:15.103179Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:32:15.103248Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:32:15.103282Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-09T14:32:15.103354Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:32:15.103390Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:32:15.103419Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:32:15.107685Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-09T14:32:15.107850Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:32:15.108090Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-09T14:32:15.108248Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-342238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
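
	The etcd log above ends with the usual signal-driven shutdown sequence: a terminate signal is received, the server announces it is closing, listeners are torn down, and a final "closed etcd server" line is emitted. A generic Go sketch of that SIGTERM-then-drain pattern, using a plain net/http server as a stand-in rather than etcd's embed package:

```go
// Generic sketch of the signal-then-close sequence the etcd lines above record.
// The address and timings are placeholders; this is not etcd's implementation.
package main

import (
	"context"
	"log"
	"net/http"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: "127.0.0.1:2381"} // placeholder listen address
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Wait for SIGTERM/SIGINT ("received signal; shutting down").
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM, syscall.SIGINT)
	defer stop()
	<-ctx.Done()
	log.Println("received signal; shutting down")

	// Stop accepting connections and drain in-flight requests, then report closure.
	shutdownCtx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	log.Println("closing server")
	if err := srv.Shutdown(shutdownCtx); err != nil {
		log.Printf("shutdown error: %v", err)
	}
	log.Println("closed server")
}
```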
	
	
	==> etcd [bf258d25a5e083c243df5f441d370436199818e034bda7733cdefc1cedba4399] <==
	{"level":"warn","ts":"2025-11-09T14:32:30.196876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.243438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.254699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.288223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.324447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.365311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.397777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.434361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.444049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.477950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.497040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.532691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.556684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.592552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.621665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.645066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.665713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.694128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.724346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.748352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.786113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.817793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.856408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.911141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:31.004220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58410","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:32:50 up  1:15,  0 user,  load average: 6.06, 3.43, 2.39
	Linux pause-342238 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c6f6fd508ff4f00007a53c6092586ec8651aa027004a7f6f7018d9f33274a1d] <==
	I1109 14:32:14.033105       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:32:14.033503       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:32:14.033681       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:32:14.033755       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:32:14.033791       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:32:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:32:14.272424       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:32:14.279986       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:32:14.280083       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:32:14.281015       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kindnet [aad8dd5398369e8f3aebeb36d9dfabb64135dc113966d452fdeeed0063b3e1e2] <==
	I1109 14:32:26.130032       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:32:26.130414       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:32:26.134726       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:32:26.134811       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:32:26.134850       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:32:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:32:26.421615       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:32:26.421697       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:32:26.421730       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:32:26.437481       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:32:32.535945       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:32:32.536005       1 metrics.go:72] Registering metrics
	I1109 14:32:32.536074       1 controller.go:711] "Syncing nftables rules"
	I1109 14:32:36.421596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:32:36.421689       1 main.go:301] handling current node
	I1109 14:32:46.422151       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:32:46.422277       1 main.go:301] handling current node
	
	
	==> kube-apiserver [26b98fc2b6e91b01fe188e5f329cb231784dc1c5dd7afa048ee956eeaea49020] <==
	I1109 14:32:32.410923       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 14:32:32.410960       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:32:32.435987       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1109 14:32:32.436472       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:32:32.436490       1 policy_source.go:240] refreshing policies
	I1109 14:32:32.436594       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:32:32.436602       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:32:32.436608       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:32:32.458438       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:32:32.471251       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:32:32.505391       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:32:32.505640       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:32:32.505656       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:32:32.505769       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:32:32.506094       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:32:32.516974       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:32:32.536651       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:32:32.538925       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:32:32.545358       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:32:33.157201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:32:34.802710       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:32:36.010272       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:32:36.059327       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:32:36.109612       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:32:36.211110       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [5d123fdfeb405288face83e3bb92b58e7775f382757b3119f80305045fbcea28] <==
	W1109 14:32:14.738018       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:32:14.738049       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:32:14.742800       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:32:14.742830       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:32:14.742873       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:32:14.742879       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:32:14.742883       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:32:14.742888       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:32:14.788849       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:32:14.790241       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:32:14.800366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:32:14.843943       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:32:14.893282       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:32:14.893399       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:32:14.893736       1 instance.go:239] Using reconciler: lease
	W1109 14:32:14.900922       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:32:14.965426       1 logging.go:55] [core] [Channel #13 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:32:15.057928       1 logging.go:55] [core] [Channel #17 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:32:15.094763       1 logging.go:55] [core] [Channel #18 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:48492->127.0.0.1:2379: read: connection reset by peer"
	W1109 14:32:15.094952       1 logging.go:55] [core] [Channel #19 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	W1109 14:32:15.095453       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: failed to write client preface: write tcp 127.0.0.1:48476->127.0.0.1:2379: write: broken pipe"
	W1109 14:32:15.100414       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 14:32:15.100619       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 14:32:15.100791       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 14:32:15.100868       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [afac707896ac08acd09ba3881e8702a21359b6a9e5316bfc87cd94a6597c947f] <==
	I1109 14:32:35.731613       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:32:35.731666       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:32:35.731699       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:32:35.731704       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:32:35.731709       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:32:35.731810       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:32:35.739029       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:32:35.745655       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 14:32:35.752245       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:32:35.752181       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:32:35.757051       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:32:35.757129       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:32:35.757608       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:32:35.757662       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:32:35.760968       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:32:35.761156       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:32:35.765370       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:32:35.767547       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:32:35.769522       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:32:35.769633       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-342238"
	I1109 14:32:35.769703       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 14:32:35.809176       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:32:35.809208       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:32:35.809217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:32:35.853189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [bd2b939fdc032d3abef04f2b1ec27467d56019dfce8f70d173b907e0b789d362] <==
	
	
	==> kube-proxy [a5688c3ff2046c131f5e093ac5f29648ebf7838084be2c2c4f88f5ec839473a5] <==
	I1109 14:32:32.278165       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:32:33.145261       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:32:33.283598       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:32:33.303144       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:32:33.311989       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:32:34.645100       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:32:34.645227       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:32:34.657296       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:32:34.657657       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:32:34.657878       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:32:34.659162       1 config.go:200] "Starting service config controller"
	I1109 14:32:34.659224       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:32:34.659266       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:32:34.659309       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:32:34.659342       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:32:34.659368       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:32:34.662282       1 config.go:309] "Starting node config controller"
	I1109 14:32:34.663296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:32:34.663358       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:32:34.760123       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:32:34.760219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:32:34.760244       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c9b12325c06ba9e4ab5abe52dc50d540a4c27cfd177801eff74770b81d946220] <==
	
	
	==> kube-scheduler [48a5ab06b1f371ac5dbcd26591c6cff2b3c7a3f9d82ec3d36aacffc729c253e0] <==
	I1109 14:32:34.380517       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:32:35.839765       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:32:35.839925       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:32:35.844969       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:32:35.845191       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1109 14:32:35.845256       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1109 14:32:35.845306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:32:35.847432       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:32:35.852978       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:32:35.847630       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:32:35.857543       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:32:35.945417       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1109 14:32:35.953511       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:32:35.958671       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [e00cc266825be7d8acef576725cca22f3def607f1f54044f19ec2250c9c87463] <==
	
	
	==> kubelet <==
	Nov 09 14:32:25 pause-342238 kubelet[1305]: I1109 14:32:25.948519    1305 scope.go:117] "RemoveContainer" containerID="380328e8ff2e3837ff636037723b0b7ac29bf2c87d21bdf3437b7af96e7fd1b9"
	Nov 09 14:32:26 pause-342238 kubelet[1305]: I1109 14:32:26.007802    1305 scope.go:117] "RemoveContainer" containerID="98695eb4c34c3bc7b1b7f3ed32e548c56a553f0a55ff86003dd620ca903f90cd"
	Nov 09 14:32:26 pause-342238 kubelet[1305]: I1109 14:32:26.032918    1305 scope.go:117] "RemoveContainer" containerID="909928f9417995811bbe0ffd9c5779e2289c45f69d32a650303ec921235069f8"
	Nov 09 14:32:26 pause-342238 kubelet[1305]: I1109 14:32:26.851171    1305 scope.go:117] "RemoveContainer" containerID="4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	Nov 09 14:32:26 pause-342238 kubelet[1305]: E1109 14:32:26.852761    1305 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-4vkj9_kube-system(e8dda2a5-805d-4a5c-904c-a8ff327f8180)\"" pod="kube-system/coredns-66bc5c9577-4vkj9" podUID="e8dda2a5-805d-4a5c-904c-a8ff327f8180"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.144340    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-4vkj9\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="e8dda2a5-805d-4a5c-904c-a8ff327f8180" pod="kube-system/coredns-66bc5c9577-4vkj9"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.145042    1305 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-342238\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.145156    1305 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-342238\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.204706    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="4f893e796c1ff75a6fa95936e877b240" pod="kube-system/etcd-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.225586    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="04361c61235cdebe210dd630178960d4" pod="kube-system/kube-scheduler-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.332443    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="8297305e855c4c37329d28dfaf111542" pod="kube-system/kube-apiserver-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.372202    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="250f4c32a7ddbe3bb161b27a06859cf2" pod="kube-system/kube-controller-manager-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.384743    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-dvtdj\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="2b0a0e90-64b0-4df4-bf36-a524a30af1f2" pod="kube-system/kindnet-dvtdj"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.389913    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-r56tq\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="cc4772f3-712c-4c0e-8991-2e92a242e19c" pod="kube-system/kube-proxy-r56tq"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.397077    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="4f893e796c1ff75a6fa95936e877b240" pod="kube-system/etcd-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.399334    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="04361c61235cdebe210dd630178960d4" pod="kube-system/kube-scheduler-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.428393    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="8297305e855c4c37329d28dfaf111542" pod="kube-system/kube-apiserver-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.439830    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="250f4c32a7ddbe3bb161b27a06859cf2" pod="kube-system/kube-controller-manager-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: I1109 14:32:32.783459    1305 scope.go:117] "RemoveContainer" containerID="4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.783915    1305 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-4vkj9_kube-system(e8dda2a5-805d-4a5c-904c-a8ff327f8180)\"" pod="kube-system/coredns-66bc5c9577-4vkj9" podUID="e8dda2a5-805d-4a5c-904c-a8ff327f8180"
	Nov 09 14:32:35 pause-342238 kubelet[1305]: W1109 14:32:35.636275    1305 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 09 14:32:44 pause-342238 kubelet[1305]: I1109 14:32:44.456126    1305 scope.go:117] "RemoveContainer" containerID="4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	Nov 09 14:32:47 pause-342238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:32:47 pause-342238 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:32:47 pause-342238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-342238 -n pause-342238
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-342238 -n pause-342238: exit status 2 (448.913853ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
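The --format value passed to the status command above ({{.APIServer}} here, {{.Host}} further down) is a Go text/template rendered against minikube's status structure, which is why only the bare field value "Running" lands on stdout while the failure is signalled through the exit code. A minimal sketch of that rendering; the Status struct below is illustrative and only mirrors the field names used by the templates in this report, it is not minikube's actual type:

// status_format_sketch.go: illustrative only. Shows how a --format string such as
// "{{.APIServer}}" is parsed and executed with Go's text/template package.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}

	// Parse the user-supplied format string, then render it against the struct.
	tmpl, err := template.New("status").Parse("{{.APIServer}}")
	if err != nil {
		panic(err)
	}
	if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Running
		panic(err)
	}
}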
helpers_test.go:269: (dbg) Run:  kubectl --context pause-342238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
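The field-selector query above is the harness's quick check for pods stuck outside the Running phase. An equivalent standalone check, sketched in Go and assuming kubectl is on PATH and the pause-342238 context still exists:

// nonrunning_pods_sketch.go: reproduces the post-mortem query from this report,
// listing the names of all pods, in every namespace, whose status.phase is not Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command(
		"kubectl", "--context", "pause-342238", "get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	if names := strings.Fields(string(out)); len(names) > 0 {
		fmt.Println("pods not in phase Running:", names)
	} else {
		fmt.Println("all pods are in phase Running")
	}
}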
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-342238
helpers_test.go:243: (dbg) docker inspect pause-342238:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b",
	        "Created": "2025-11-09T14:30:46.184710211Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 162164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:30:46.249829699Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b/hosts",
	        "LogPath": "/var/lib/docker/containers/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b/8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b-json.log",
	        "Name": "/pause-342238",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-342238:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-342238",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8616b3d77f84bc96996155893edf2d015293a4fc8dbee6e29eb44e7ccde9470b",
	                "LowerDir": "/var/lib/docker/overlay2/926a1884ad44e5f009bd7313f33190f07a044fa40f529142374227c966f40740-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/926a1884ad44e5f009bd7313f33190f07a044fa40f529142374227c966f40740/merged",
	                "UpperDir": "/var/lib/docker/overlay2/926a1884ad44e5f009bd7313f33190f07a044fa40f529142374227c966f40740/diff",
	                "WorkDir": "/var/lib/docker/overlay2/926a1884ad44e5f009bd7313f33190f07a044fa40f529142374227c966f40740/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-342238",
	                "Source": "/var/lib/docker/volumes/pause-342238/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-342238",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-342238",
	                "name.minikube.sigs.k8s.io": "pause-342238",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e3609b2a319d9663b6d20470955d326e7d4e0e1752835990a6eff6cbca7594e6",
	            "SandboxKey": "/var/run/docker/netns/e3609b2a319d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33020"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33021"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33024"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33022"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33023"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-342238": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:be:6b:93:f7:94",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c6b405806b0b7491609b963ef6e96a48d9426c1f4e7f03b46455af0345964a0",
	                    "EndpointID": "c10a72cc3cd48c85e6040b7d4b586f056abb5d31524a4e545ea3db1fc69b9014",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-342238",
	                        "8616b3d77f84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
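The inspect dump above still reports "Paused": false under State, i.e. the container itself was never frozen even though the pause subcommand had been issued. When only that one field is of interest, docker inspect accepts the same Go-template syntax through its -f flag; a small sketch, assuming the docker CLI is on PATH and the pause-342238 container still exists:

// paused_check_sketch.go: queries just State.Paused instead of the full inspect dump
// captured above. Illustrative helper, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "inspect", "-f", "{{.State.Paused}}", "pause-342238").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	paused := strings.TrimSpace(string(out)) == "true"
	fmt.Println("container paused:", paused) // against the dump above this prints: container paused: false
}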
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-342238 -n pause-342238
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-342238 -n pause-342238: exit status 2 (447.415022ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-342238 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-342238 logs -n 25: (1.760253069s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p missing-upgrade-396103                                                                                                                │ missing-upgrade-396103    │ jenkins │ v1.37.0 │ 09 Nov 25 14:26 UTC │ 09 Nov 25 14:26 UTC │
	│ start   │ -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:26 UTC │ 09 Nov 25 14:27 UTC │
	│ stop    │ -p kubernetes-upgrade-334644                                                                                                             │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │ 09 Nov 25 14:27 UTC │
	│ start   │ -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │ 09 Nov 25 14:32 UTC │
	│ delete  │ -p NoKubernetes-451939                                                                                                                   │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │ 09 Nov 25 14:27 UTC │
	│ start   │ -p NoKubernetes-451939 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │ 09 Nov 25 14:27 UTC │
	│ ssh     │ -p NoKubernetes-451939 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:27 UTC │                     │
	│ stop    │ -p NoKubernetes-451939                                                                                                                   │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:28 UTC │ 09 Nov 25 14:28 UTC │
	│ start   │ -p NoKubernetes-451939 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:28 UTC │ 09 Nov 25 14:28 UTC │
	│ ssh     │ -p NoKubernetes-451939 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:28 UTC │                     │
	│ delete  │ -p NoKubernetes-451939                                                                                                                   │ NoKubernetes-451939       │ jenkins │ v1.37.0 │ 09 Nov 25 14:28 UTC │ 09 Nov 25 14:28 UTC │
	│ start   │ -p stopped-upgrade-471685 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-471685    │ jenkins │ v1.32.0 │ 09 Nov 25 14:28 UTC │ 09 Nov 25 14:29 UTC │
	│ stop    │ stopped-upgrade-471685 stop                                                                                                              │ stopped-upgrade-471685    │ jenkins │ v1.32.0 │ 09 Nov 25 14:29 UTC │ 09 Nov 25 14:29 UTC │
	│ start   │ -p stopped-upgrade-471685 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-471685    │ jenkins │ v1.37.0 │ 09 Nov 25 14:29 UTC │ 09 Nov 25 14:29 UTC │
	│ delete  │ -p stopped-upgrade-471685                                                                                                                │ stopped-upgrade-471685    │ jenkins │ v1.37.0 │ 09 Nov 25 14:29 UTC │ 09 Nov 25 14:29 UTC │
	│ start   │ -p running-upgrade-382260 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-382260    │ jenkins │ v1.32.0 │ 09 Nov 25 14:29 UTC │ 09 Nov 25 14:30 UTC │
	│ start   │ -p running-upgrade-382260 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-382260    │ jenkins │ v1.37.0 │ 09 Nov 25 14:30 UTC │ 09 Nov 25 14:30 UTC │
	│ delete  │ -p running-upgrade-382260                                                                                                                │ running-upgrade-382260    │ jenkins │ v1.37.0 │ 09 Nov 25 14:30 UTC │ 09 Nov 25 14:30 UTC │
	│ start   │ -p pause-342238 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-342238              │ jenkins │ v1.37.0 │ 09 Nov 25 14:30 UTC │ 09 Nov 25 14:32 UTC │
	│ start   │ -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                        │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │                     │
	│ start   │ -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │ 09 Nov 25 14:32 UTC │
	│ start   │ -p pause-342238 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-342238              │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │ 09 Nov 25 14:32 UTC │
	│ delete  │ -p kubernetes-upgrade-334644                                                                                                             │ kubernetes-upgrade-334644 │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │ 09 Nov 25 14:32 UTC │
	│ start   │ -p force-systemd-flag-519664 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-519664 │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │                     │
	│ pause   │ -p pause-342238 --alsologtostderr -v=5                                                                                                   │ pause-342238              │ jenkins │ v1.37.0 │ 09 Nov 25 14:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:32:41
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:32:41.308187  168662 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:32:41.308516  168662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:32:41.308548  168662 out.go:374] Setting ErrFile to fd 2...
	I1109 14:32:41.308577  168662 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:32:41.308937  168662 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:32:41.309576  168662 out.go:368] Setting JSON to false
	I1109 14:32:41.310918  168662 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4512,"bootTime":1762694250,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:32:41.311062  168662 start.go:143] virtualization:  
	I1109 14:32:41.317140  168662 out.go:179] * [force-systemd-flag-519664] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:32:41.320599  168662 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:32:41.320675  168662 notify.go:221] Checking for updates...
	I1109 14:32:41.327757  168662 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:32:41.330739  168662 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:32:41.333899  168662 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:32:41.337043  168662 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:32:41.340202  168662 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:32:41.343686  168662 config.go:182] Loaded profile config "pause-342238": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:32:41.343782  168662 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:32:41.373447  168662 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:32:41.373569  168662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:32:41.445638  168662 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:32:41.435311659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:32:41.445758  168662 docker.go:319] overlay module found
	I1109 14:32:41.449106  168662 out.go:179] * Using the docker driver based on user configuration
	I1109 14:32:41.452037  168662 start.go:309] selected driver: docker
	I1109 14:32:41.452055  168662 start.go:930] validating driver "docker" against <nil>
	I1109 14:32:41.452069  168662 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:32:41.453476  168662 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:32:41.516678  168662 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:32:41.506033835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:32:41.516879  168662 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:32:41.517111  168662 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 14:32:41.520125  168662 out.go:179] * Using Docker driver with root privileges
	I1109 14:32:41.523049  168662 cni.go:84] Creating CNI manager for ""
	I1109 14:32:41.523116  168662 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:32:41.523129  168662 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:32:41.523211  168662 start.go:353] cluster config:
	{Name:force-systemd-flag-519664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-519664 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:32:41.526420  168662 out.go:179] * Starting "force-systemd-flag-519664" primary control-plane node in "force-systemd-flag-519664" cluster
	I1109 14:32:41.530307  168662 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:32:41.533297  168662 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:32:41.536036  168662 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:32:41.536091  168662 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:32:41.536130  168662 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:32:41.536139  168662 cache.go:65] Caching tarball of preloaded images
	I1109 14:32:41.536227  168662 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:32:41.536237  168662 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:32:41.536340  168662 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/force-systemd-flag-519664/config.json ...
	I1109 14:32:41.536357  168662 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/force-systemd-flag-519664/config.json: {Name:mk28db7b284d5d4952368e85bfa9f43c92c325a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:32:41.555667  168662 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:32:41.555692  168662 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:32:41.555711  168662 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:32:41.555735  168662 start.go:360] acquireMachinesLock for force-systemd-flag-519664: {Name:mk9577fc13b01146c0de79a0ba1703985e6f141e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:32:41.555852  168662 start.go:364] duration metric: took 98.257µs to acquireMachinesLock for "force-systemd-flag-519664"
	I1109 14:32:41.555915  168662 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-519664 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-519664 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:32:41.555995  168662 start.go:125] createHost starting for "" (driver="docker")
	W1109 14:32:42.531315  166531 pod_ready.go:104] pod "coredns-66bc5c9577-4vkj9" is not "Ready", error: <nil>
	W1109 14:32:44.550049  166531 pod_ready.go:104] pod "coredns-66bc5c9577-4vkj9" is not "Ready", error: <nil>
	I1109 14:32:45.063834  166531 pod_ready.go:94] pod "coredns-66bc5c9577-4vkj9" is "Ready"
	I1109 14:32:45.063881  166531 pod_ready.go:86] duration metric: took 11.038253986s for pod "coredns-66bc5c9577-4vkj9" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.120759  166531 pod_ready.go:83] waiting for pod "etcd-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.132122  166531 pod_ready.go:94] pod "etcd-pause-342238" is "Ready"
	I1109 14:32:45.132155  166531 pod_ready.go:86] duration metric: took 11.365253ms for pod "etcd-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.136891  166531 pod_ready.go:83] waiting for pod "kube-apiserver-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.144668  166531 pod_ready.go:94] pod "kube-apiserver-pause-342238" is "Ready"
	I1109 14:32:45.144763  166531 pod_ready.go:86] duration metric: took 7.845932ms for pod "kube-apiserver-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.147797  166531 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.236138  166531 pod_ready.go:94] pod "kube-controller-manager-pause-342238" is "Ready"
	I1109 14:32:45.236179  166531 pod_ready.go:86] duration metric: took 88.255389ms for pod "kube-controller-manager-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:45.433442  166531 pod_ready.go:83] waiting for pod "kube-proxy-r56tq" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:41.559297  168662 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:32:41.559536  168662 start.go:159] libmachine.API.Create for "force-systemd-flag-519664" (driver="docker")
	I1109 14:32:41.559582  168662 client.go:173] LocalClient.Create starting
	I1109 14:32:41.559658  168662 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 14:32:41.559695  168662 main.go:143] libmachine: Decoding PEM data...
	I1109 14:32:41.559719  168662 main.go:143] libmachine: Parsing certificate...
	I1109 14:32:41.559779  168662 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 14:32:41.559802  168662 main.go:143] libmachine: Decoding PEM data...
	I1109 14:32:41.559812  168662 main.go:143] libmachine: Parsing certificate...
	I1109 14:32:41.560235  168662 cli_runner.go:164] Run: docker network inspect force-systemd-flag-519664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:32:41.576242  168662 cli_runner.go:211] docker network inspect force-systemd-flag-519664 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:32:41.576355  168662 network_create.go:284] running [docker network inspect force-systemd-flag-519664] to gather additional debugging logs...
	I1109 14:32:41.576376  168662 cli_runner.go:164] Run: docker network inspect force-systemd-flag-519664
	W1109 14:32:41.597754  168662 cli_runner.go:211] docker network inspect force-systemd-flag-519664 returned with exit code 1
	I1109 14:32:41.597784  168662 network_create.go:287] error running [docker network inspect force-systemd-flag-519664]: docker network inspect force-systemd-flag-519664: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-519664 not found
	I1109 14:32:41.597797  168662 network_create.go:289] output of [docker network inspect force-systemd-flag-519664]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-519664 not found
	
	** /stderr **
	I1109 14:32:41.597979  168662 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:32:41.615335  168662 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b901b8dcb821 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:01:f6:7f:4e:91} reservation:<nil>}
	I1109 14:32:41.615631  168662 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-46dda1eda2df IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:a9:4d:4f:8f:31} reservation:<nil>}
	I1109 14:32:41.615955  168662 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3b44df0b0b1c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:80:ac:56:fe:3d} reservation:<nil>}
	I1109 14:32:41.616347  168662 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a2fa0}
	I1109 14:32:41.616372  168662 network_create.go:124] attempt to create docker network force-systemd-flag-519664 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 14:32:41.616433  168662 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-519664 force-systemd-flag-519664
	I1109 14:32:41.676322  168662 network_create.go:108] docker network force-systemd-flag-519664 192.168.76.0/24 created
	I1109 14:32:41.676355  168662 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-519664" container
	I1109 14:32:41.676446  168662 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:32:41.692305  168662 cli_runner.go:164] Run: docker volume create force-systemd-flag-519664 --label name.minikube.sigs.k8s.io=force-systemd-flag-519664 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:32:41.710931  168662 oci.go:103] Successfully created a docker volume force-systemd-flag-519664
	I1109 14:32:41.711034  168662 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-519664-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-519664 --entrypoint /usr/bin/test -v force-systemd-flag-519664:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:32:42.268059  168662 oci.go:107] Successfully prepared a docker volume force-systemd-flag-519664
	I1109 14:32:42.268141  168662 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:32:42.268156  168662 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:32:42.268241  168662 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-519664:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:32:45.832946  166531 pod_ready.go:94] pod "kube-proxy-r56tq" is "Ready"
	I1109 14:32:45.832987  166531 pod_ready.go:86] duration metric: took 399.519235ms for pod "kube-proxy-r56tq" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:46.033394  166531 pod_ready.go:83] waiting for pod "kube-scheduler-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:46.433223  166531 pod_ready.go:94] pod "kube-scheduler-pause-342238" is "Ready"
	I1109 14:32:46.433247  166531 pod_ready.go:86] duration metric: took 399.82747ms for pod "kube-scheduler-pause-342238" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:32:46.433259  166531 pod_ready.go:40] duration metric: took 12.413813745s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:32:46.498528  166531 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:32:46.528332  166531 out.go:179] * Done! kubectl is now configured to use "pause-342238" cluster and "default" namespace by default
	I1109 14:32:46.754788  168662 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-519664:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.486495514s)
	I1109 14:32:46.754823  168662 kic.go:203] duration metric: took 4.486663383s to extract preloaded images to volume ...
	W1109 14:32:46.755028  168662 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 14:32:46.755290  168662 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:32:46.847826  168662 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-519664 --name force-systemd-flag-519664 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-519664 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-519664 --network force-systemd-flag-519664 --ip 192.168.76.2 --volume force-systemd-flag-519664:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:32:47.243843  168662 cli_runner.go:164] Run: docker container inspect force-systemd-flag-519664 --format={{.State.Running}}
	I1109 14:32:47.264850  168662 cli_runner.go:164] Run: docker container inspect force-systemd-flag-519664 --format={{.State.Status}}
	I1109 14:32:47.293666  168662 cli_runner.go:164] Run: docker exec force-systemd-flag-519664 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:32:47.360300  168662 oci.go:144] the created container "force-systemd-flag-519664" has a running status.
	I1109 14:32:47.360324  168662 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/force-systemd-flag-519664/id_rsa...
	I1109 14:32:48.848848  168662 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/force-systemd-flag-519664/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1109 14:32:48.848924  168662 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/force-systemd-flag-519664/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:32:48.874147  168662 cli_runner.go:164] Run: docker container inspect force-systemd-flag-519664 --format={{.State.Status}}
	I1109 14:32:48.904424  168662 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:32:48.904444  168662 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-519664 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:32:48.971204  168662 cli_runner.go:164] Run: docker container inspect force-systemd-flag-519664 --format={{.State.Status}}
	I1109 14:32:48.995011  168662 machine.go:94] provisionDockerMachine start ...
	I1109 14:32:48.995391  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:49.021929  168662 main.go:143] libmachine: Using SSH client type: native
	I1109 14:32:49.022268  168662 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1109 14:32:49.022278  168662 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:32:49.211027  168662 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-519664
	
	I1109 14:32:49.211102  168662 ubuntu.go:182] provisioning hostname "force-systemd-flag-519664"
	I1109 14:32:49.211203  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:49.241516  168662 main.go:143] libmachine: Using SSH client type: native
	I1109 14:32:49.241842  168662 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1109 14:32:49.241860  168662 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-519664 && echo "force-systemd-flag-519664" | sudo tee /etc/hostname
	I1109 14:32:49.419092  168662 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-519664
	
	I1109 14:32:49.419171  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:49.463371  168662 main.go:143] libmachine: Using SSH client type: native
	I1109 14:32:49.463765  168662 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1109 14:32:49.463786  168662 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-519664' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-519664/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-519664' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:32:49.631064  168662 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:32:49.631090  168662 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:32:49.631196  168662 ubuntu.go:190] setting up certificates
	I1109 14:32:49.631231  168662 provision.go:84] configureAuth start
	I1109 14:32:49.631306  168662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-519664
	I1109 14:32:49.660913  168662 provision.go:143] copyHostCerts
	I1109 14:32:49.660951  168662 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:32:49.660982  168662 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:32:49.660989  168662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:32:49.661078  168662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:32:49.661154  168662 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:32:49.661171  168662 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:32:49.661176  168662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:32:49.661202  168662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:32:49.661243  168662 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:32:49.661259  168662 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:32:49.661263  168662 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:32:49.661285  168662 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:32:49.661330  168662 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-519664 san=[127.0.0.1 192.168.76.2 force-systemd-flag-519664 localhost minikube]
	I1109 14:32:50.060362  168662 provision.go:177] copyRemoteCerts
	I1109 14:32:50.060429  168662 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:32:50.060472  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:50.081463  168662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/force-systemd-flag-519664/id_rsa Username:docker}
	I1109 14:32:50.193232  168662 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 14:32:50.193353  168662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1109 14:32:50.214191  168662 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 14:32:50.214256  168662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:32:50.237529  168662 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 14:32:50.237590  168662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:32:50.263428  168662 provision.go:87] duration metric: took 632.177351ms to configureAuth
	I1109 14:32:50.263451  168662 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:32:50.263625  168662 config.go:182] Loaded profile config "force-systemd-flag-519664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:32:50.263727  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:50.289247  168662 main.go:143] libmachine: Using SSH client type: native
	I1109 14:32:50.289554  168662 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33025 <nil> <nil>}
	I1109 14:32:50.289569  168662 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:32:50.587240  168662 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:32:50.587260  168662 machine.go:97] duration metric: took 1.592229006s to provisionDockerMachine
	I1109 14:32:50.587270  168662 client.go:176] duration metric: took 9.027675559s to LocalClient.Create
	I1109 14:32:50.587284  168662 start.go:167] duration metric: took 9.027749947s to libmachine.API.Create "force-systemd-flag-519664"
	I1109 14:32:50.587295  168662 start.go:293] postStartSetup for "force-systemd-flag-519664" (driver="docker")
	I1109 14:32:50.587305  168662 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:32:50.587368  168662 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:32:50.587409  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:50.613414  168662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/force-systemd-flag-519664/id_rsa Username:docker}
	I1109 14:32:50.726380  168662 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:32:50.730558  168662 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:32:50.730588  168662 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:32:50.730599  168662 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:32:50.730654  168662 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:32:50.730740  168662 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:32:50.730752  168662 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> /etc/ssl/certs/41162.pem
	I1109 14:32:50.730856  168662 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:32:50.739442  168662 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:32:50.759666  168662 start.go:296] duration metric: took 172.35728ms for postStartSetup
	I1109 14:32:50.760105  168662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-519664
	I1109 14:32:50.782236  168662 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/force-systemd-flag-519664/config.json ...
	I1109 14:32:50.782827  168662 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:32:50.782883  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:50.805476  168662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/force-systemd-flag-519664/id_rsa Username:docker}
	I1109 14:32:50.913428  168662 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:32:50.918236  168662 start.go:128] duration metric: took 9.36222645s to createHost
	I1109 14:32:50.918260  168662 start.go:83] releasing machines lock for "force-systemd-flag-519664", held for 9.362395682s
	I1109 14:32:50.918328  168662 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-519664
	I1109 14:32:50.937408  168662 ssh_runner.go:195] Run: cat /version.json
	I1109 14:32:50.937472  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:50.937700  168662 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:32:50.937764  168662 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-519664
	I1109 14:32:50.957510  168662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/force-systemd-flag-519664/id_rsa Username:docker}
	I1109 14:32:50.979169  168662 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33025 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/force-systemd-flag-519664/id_rsa Username:docker}
	I1109 14:32:51.077459  168662 ssh_runner.go:195] Run: systemctl --version
	I1109 14:32:51.191766  168662 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:32:51.260045  168662 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:32:51.265694  168662 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:32:51.265778  168662 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:32:51.304031  168662 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 14:32:51.304059  168662 start.go:496] detecting cgroup driver to use...
	I1109 14:32:51.304072  168662 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1109 14:32:51.304129  168662 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
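Note on the subnet scan above: minikube walks the bridge networks it previously created (192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24) before settling on the free 192.168.76.0/24. A rough way to reproduce that view by hand, using the same label minikube applies to its networks (illustrative only; assumes the docker CLI on this host):

	# List minikube-managed networks and the subnet each one occupies
	docker network ls -q --filter label=created_by.minikube.sigs.k8s.io=true \
	  | xargs docker network inspect -f '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'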
	
	
	==> CRI-O <==
	Nov 09 14:32:26 pause-342238 crio[2193]: time="2025-11-09T14:32:26.546294959Z" level=info msg="Started container" PID=2509 containerID=a5688c3ff2046c131f5e093ac5f29648ebf7838084be2c2c4f88f5ec839473a5 description=kube-system/kube-proxy-r56tq/kube-proxy id=a66dab69-3513-41a2-ad8f-5b065fda47b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bce71343d0534c12c2c39b3dc11bd816ba438d5cb754d13ff71742834b621858
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.422024254Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.426049833Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.42608409Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.426112299Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.430810184Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.430980818Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.43105487Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.436971638Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.437126051Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.43719348Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.444857528Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.444907333Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.444941737Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.452629013Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:32:36 pause-342238 crio[2193]: time="2025-11-09T14:32:36.452825748Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.463350115Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=68646f80-ed7e-48a3-a41c-43f4ad145b8a name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.4669731Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=d4c70845-6eab-46ce-8494-ad15b7ab9509 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.468999732Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-4vkj9/coredns" id=b587fc55-0061-4db5-ab1b-0095e66dfa41 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.469144217Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.484213055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.485025275Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.516501229Z" level=info msg="Created container 08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425: kube-system/coredns-66bc5c9577-4vkj9/coredns" id=b587fc55-0061-4db5-ab1b-0095e66dfa41 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.517489417Z" level=info msg="Starting container: 08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425" id=993f93ce-d4ec-4b2a-adb1-35106383dfd6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:32:44 pause-342238 crio[2193]: time="2025-11-09T14:32:44.520227805Z" level=info msg="Started container" PID=2753 containerID=08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425 description=kube-system/coredns-66bc5c9577-4vkj9/coredns id=993f93ce-d4ec-4b2a-adb1-35106383dfd6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a67d49738b53b28ae5b634dbbd4392da5294a814f429a45fce96913a142a8f21
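The CRI-O excerpt above is journal output from the pause-342238 node. A comparable tail can be collected while the profile is still running (illustrative; assumes CRI-O runs as the "crio" systemd unit inside the node, as it does in the kicbase image):

	# Tail the CRI-O unit log inside the minikube node
	minikube -p pause-342238 ssh -- sudo journalctl -u crio --no-pager -n 30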
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	08018cf90a3eb       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   8 seconds ago       Running             coredns                   2                   a67d49738b53b       coredns-66bc5c9577-4vkj9               kube-system
	aad8dd5398369       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   27 seconds ago      Running             kindnet-cni               2                   704a5f3cfeced       kindnet-dvtdj                          kube-system
	a5688c3ff2046       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   27 seconds ago      Running             kube-proxy                2                   bce71343d0534       kube-proxy-r56tq                       kube-system
	bf258d25a5e08       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   27 seconds ago      Running             etcd                      2                   8d67c5cf68e27       etcd-pause-342238                      kube-system
	48a5ab06b1f37       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   27 seconds ago      Running             kube-scheduler            2                   f5e6f6d1c3687       kube-scheduler-pause-342238            kube-system
	afac707896ac0       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   27 seconds ago      Running             kube-controller-manager   2                   aa679bf46e066       kube-controller-manager-pause-342238   kube-system
	26b98fc2b6e91       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   27 seconds ago      Running             kube-apiserver            2                   50699b691ecec       kube-apiserver-pause-342238            kube-system
	e00cc266825be       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   39 seconds ago      Exited              kube-scheduler            1                   f5e6f6d1c3687       kube-scheduler-pause-342238            kube-system
	c9b12325c06ba       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   39 seconds ago      Exited              kube-proxy                1                   bce71343d0534       kube-proxy-r56tq                       kube-system
	350dceaf4e4da       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   39 seconds ago      Exited              etcd                      1                   8d67c5cf68e27       etcd-pause-342238                      kube-system
	5c6f6fd508ff4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   39 seconds ago      Exited              kindnet-cni               1                   704a5f3cfeced       kindnet-dvtdj                          kube-system
	bd2b939fdc032       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   39 seconds ago      Exited              kube-controller-manager   1                   aa679bf46e066       kube-controller-manager-pause-342238   kube-system
	5d123fdfeb405       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   39 seconds ago      Exited              kube-apiserver            1                   50699b691ecec       kube-apiserver-pause-342238            kube-system
	4a1f936e07428       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   39 seconds ago      Exited              coredns                   1                   a67d49738b53b       coredns-66bc5c9577-4vkj9               kube-system
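The table above shows two generations of each control-plane container: attempt 1 exited when the runtime was restarted, attempt 2 is running. Roughly the same listing can be pulled straight from the CRI runtime (illustrative; crictl ships in the kicbase node image):

	# List all CRI containers, including exited earlier attempts
	minikube -p pause-342238 ssh -- sudo crictl ps -a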
	
	
	==> coredns [08018cf90a3eb18947e98d613c69d015284e179c3a6bad63d3d660e2bba9d425] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56056 - 58999 "HINFO IN 1044715838941189554.2066353908982510073. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013012685s
	
	
	==> coredns [4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54554 - 38985 "HINFO IN 2451062830002027841.7771890144460148705. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020729974s
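The two coredns dumps above are the same pod: the first is the running container (attempt 2), the second the exited attempt 1 that shut down on SIGTERM. The per-attempt logs can be fetched directly (illustrative; uses the kubectl context the run configured for this profile):

	# Current coredns container
	kubectl --context pause-342238 -n kube-system logs coredns-66bc5c9577-4vkj9
	# Previous (exited) attempt of the same pod
	kubectl --context pause-342238 -n kube-system logs coredns-66bc5c9577-4vkj9 --previous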
	
	
	==> describe nodes <==
	Name:               pause-342238
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-342238
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=pause-342238
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_31_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:31:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-342238
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:32:37 +0000   Sun, 09 Nov 2025 14:31:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:32:37 +0000   Sun, 09 Nov 2025 14:31:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:32:37 +0000   Sun, 09 Nov 2025 14:31:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:32:37 +0000   Sun, 09 Nov 2025 14:32:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-342238
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                51f3084b-4cea-414c-a9fb-6d5bf7a7a557
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-4vkj9                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     92s
	  kube-system                 etcd-pause-342238                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         98s
	  kube-system                 kindnet-dvtdj                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-pause-342238             250m (12%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-pause-342238    200m (10%)    0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-r56tq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-342238             100m (5%)     0 (0%)      0 (0%)           0 (0%)         98s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 91s   kube-proxy       
	  Normal   Starting                 18s   kube-proxy       
	  Normal   Starting                 98s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 98s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  98s   kubelet          Node pause-342238 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    98s   kubelet          Node pause-342238 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     98s   kubelet          Node pause-342238 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           93s   node-controller  Node pause-342238 event: Registered Node pause-342238 in Controller
	  Normal   NodeReady                51s   kubelet          Node pause-342238 status is now: NodeReady
	  Warning  ContainerGCFailed        38s   kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           18s   node-controller  Node pause-342238 event: Registered Node pause-342238 in Controller
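The node dump above is standard describe output; the same conditions, allocatable resources, and event history can be re-queried against the live cluster (illustrative):

	kubectl --context pause-342238 describe node pause-342238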
	
	
	==> dmesg <==
	[  +3.159182] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:03] overlayfs: idmapped layers are currently not supported
	[  +3.581786] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:05] overlayfs: idmapped layers are currently not supported
	[ +45.728314] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:12] overlayfs: idmapped layers are currently not supported
	[ +35.606556] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
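The dmesg excerpt comes from the shared host kernel (the node is a container on the 5.15.0-1084-aws host), so the repeated overlayfs warnings accumulate across every profile started on this runner. To filter for them directly (illustrative; the quoting keeps the pipe on the remote side):

	minikube -p pause-342238 ssh -- "sudo dmesg | grep -i overlayfs"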
	
	
	==> etcd [350dceaf4e4da722256622b6806f53ab082a3778455c73c0ca943ef840d44bfe] <==
	{"level":"warn","ts":"2025-11-09T14:32:14.819949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:14.822952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:14.908552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:14.966948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:15.070138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48466","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:32:15.080731Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-09T14:32:15.082757Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-342238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-09T14:32:15.082993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-11-09T14:32:15.083245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48476","server-name":"","error":"read tcp 127.0.0.1:2379->127.0.0.1:48476: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:32:15.098698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48504","server-name":"","error":"write tcp 127.0.0.1:2379->127.0.0.1:48504: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:32:15.098782Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-09T14:32:15.101140Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:32:15.101247Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-09T14:32:15.101365Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-09T14:32:15.102837Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-09T14:32:15.103179Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:32:15.103248Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:32:15.103282Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-09T14:32:15.103354Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-09T14:32:15.103390Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-09T14:32:15.103419Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:32:15.107685Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-09T14:32:15.107850Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-09T14:32:15.108090Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-09T14:32:15.108248Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-342238","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [bf258d25a5e083c243df5f441d370436199818e034bda7733cdefc1cedba4399] <==
	{"level":"warn","ts":"2025-11-09T14:32:30.196876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.243438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.254699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.288223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.324447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.365311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.397777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.434361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.444049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.477950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.497040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.532691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.556684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.592552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.621665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.645066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.665713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.694128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.724346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.748352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.786113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.817793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.856408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:30.911141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:32:31.004220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58410","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:32:53 up  1:15,  0 user,  load average: 6.06, 3.43, 2.39
	Linux pause-342238 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5c6f6fd508ff4f00007a53c6092586ec8651aa027004a7f6f7018d9f33274a1d] <==
	I1109 14:32:14.033105       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:32:14.033503       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:32:14.033681       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:32:14.033755       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:32:14.033791       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:32:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:32:14.272424       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:32:14.279986       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:32:14.280083       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:32:14.281015       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kindnet [aad8dd5398369e8f3aebeb36d9dfabb64135dc113966d452fdeeed0063b3e1e2] <==
	I1109 14:32:26.130032       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:32:26.130414       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:32:26.134726       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:32:26.134811       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:32:26.134850       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:32:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:32:26.421615       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:32:26.421697       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:32:26.421730       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:32:26.437481       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:32:32.535945       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:32:32.536005       1 metrics.go:72] Registering metrics
	I1109 14:32:32.536074       1 controller.go:711] "Syncing nftables rules"
	I1109 14:32:36.421596       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:32:36.421689       1 main.go:301] handling current node
	I1109 14:32:46.422151       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:32:46.422277       1 main.go:301] handling current node
	
	
	==> kube-apiserver [26b98fc2b6e91b01fe188e5f329cb231784dc1c5dd7afa048ee956eeaea49020] <==
	I1109 14:32:32.410923       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 14:32:32.410960       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:32:32.435987       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1109 14:32:32.436472       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:32:32.436490       1 policy_source.go:240] refreshing policies
	I1109 14:32:32.436594       1 aggregator.go:171] initial CRD sync complete...
	I1109 14:32:32.436602       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:32:32.436608       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:32:32.458438       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:32:32.471251       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:32:32.505391       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:32:32.505640       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:32:32.505656       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:32:32.505769       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:32:32.506094       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:32:32.516974       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:32:32.536651       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:32:32.538925       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:32:32.545358       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:32:33.157201       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:32:34.802710       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:32:36.010272       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:32:36.059327       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:32:36.109612       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:32:36.211110       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [5d123fdfeb405288face83e3bb92b58e7775f382757b3119f80305045fbcea28] <==
	W1109 14:32:14.738018       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1109 14:32:14.738049       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1109 14:32:14.742800       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1109 14:32:14.742830       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1109 14:32:14.742873       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1109 14:32:14.742879       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1109 14:32:14.742883       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1109 14:32:14.742888       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1109 14:32:14.788849       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1109 14:32:14.790241       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1109 14:32:14.800366       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1109 14:32:14.843943       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:32:14.893282       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1109 14:32:14.893399       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1109 14:32:14.893736       1 instance.go:239] Using reconciler: lease
	W1109 14:32:14.900922       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:32:14.965426       1 logging.go:55] [core] [Channel #13 SubChannel #14]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:32:15.057928       1 logging.go:55] [core] [Channel #17 SubChannel #20]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1109 14:32:15.094763       1 logging.go:55] [core] [Channel #18 SubChannel #22]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: read tcp 127.0.0.1:48492->127.0.0.1:2379: read: connection reset by peer"
	W1109 14:32:15.094952       1 logging.go:55] [core] [Channel #19 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: EOF"
	W1109 14:32:15.095453       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: failed to write client preface: write tcp 127.0.0.1:48476->127.0.0.1:2379: write: broken pipe"
	W1109 14:32:15.100414       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 14:32:15.100619       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 14:32:15.100791       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 14:32:15.100868       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [afac707896ac08acd09ba3881e8702a21359b6a9e5316bfc87cd94a6597c947f] <==
	I1109 14:32:35.731613       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:32:35.731666       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:32:35.731699       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:32:35.731704       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:32:35.731709       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:32:35.731810       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:32:35.739029       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:32:35.745655       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1109 14:32:35.752245       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:32:35.752181       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:32:35.757051       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:32:35.757129       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:32:35.757608       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:32:35.757662       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:32:35.760968       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:32:35.761156       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:32:35.765370       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:32:35.767547       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:32:35.769522       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:32:35.769633       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-342238"
	I1109 14:32:35.769703       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 14:32:35.809176       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:32:35.809208       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:32:35.809217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:32:35.853189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [bd2b939fdc032d3abef04f2b1ec27467d56019dfce8f70d173b907e0b789d362] <==
	
	
	==> kube-proxy [a5688c3ff2046c131f5e093ac5f29648ebf7838084be2c2c4f88f5ec839473a5] <==
	I1109 14:32:32.278165       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:32:33.145261       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:32:33.283598       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:32:33.303144       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:32:33.311989       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:32:34.645100       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:32:34.645227       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:32:34.657296       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:32:34.657657       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:32:34.657878       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:32:34.659162       1 config.go:200] "Starting service config controller"
	I1109 14:32:34.659224       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:32:34.659266       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:32:34.659309       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:32:34.659342       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:32:34.659368       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:32:34.662282       1 config.go:309] "Starting node config controller"
	I1109 14:32:34.663296       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:32:34.663358       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:32:34.760123       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:32:34.760219       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:32:34.760244       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [c9b12325c06ba9e4ab5abe52dc50d540a4c27cfd177801eff74770b81d946220] <==
	
	
	==> kube-scheduler [48a5ab06b1f371ac5dbcd26591c6cff2b3c7a3f9d82ec3d36aacffc729c253e0] <==
	I1109 14:32:34.380517       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:32:35.839765       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:32:35.839925       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:32:35.844969       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:32:35.845191       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1109 14:32:35.845256       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1109 14:32:35.845306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:32:35.847432       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:32:35.852978       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:32:35.847630       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:32:35.857543       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:32:35.945417       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1109 14:32:35.953511       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:32:35.958671       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [e00cc266825be7d8acef576725cca22f3def607f1f54044f19ec2250c9c87463] <==
	
	
	==> kubelet <==
	Nov 09 14:32:25 pause-342238 kubelet[1305]: I1109 14:32:25.948519    1305 scope.go:117] "RemoveContainer" containerID="380328e8ff2e3837ff636037723b0b7ac29bf2c87d21bdf3437b7af96e7fd1b9"
	Nov 09 14:32:26 pause-342238 kubelet[1305]: I1109 14:32:26.007802    1305 scope.go:117] "RemoveContainer" containerID="98695eb4c34c3bc7b1b7f3ed32e548c56a553f0a55ff86003dd620ca903f90cd"
	Nov 09 14:32:26 pause-342238 kubelet[1305]: I1109 14:32:26.032918    1305 scope.go:117] "RemoveContainer" containerID="909928f9417995811bbe0ffd9c5779e2289c45f69d32a650303ec921235069f8"
	Nov 09 14:32:26 pause-342238 kubelet[1305]: I1109 14:32:26.851171    1305 scope.go:117] "RemoveContainer" containerID="4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	Nov 09 14:32:26 pause-342238 kubelet[1305]: E1109 14:32:26.852761    1305 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-4vkj9_kube-system(e8dda2a5-805d-4a5c-904c-a8ff327f8180)\"" pod="kube-system/coredns-66bc5c9577-4vkj9" podUID="e8dda2a5-805d-4a5c-904c-a8ff327f8180"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.144340    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-4vkj9\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="e8dda2a5-805d-4a5c-904c-a8ff327f8180" pod="kube-system/coredns-66bc5c9577-4vkj9"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.145042    1305 reflector.go:205] "Failed to watch" err="configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:pause-342238\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.145156    1305 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-342238\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.204706    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="4f893e796c1ff75a6fa95936e877b240" pod="kube-system/etcd-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.225586    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="04361c61235cdebe210dd630178960d4" pod="kube-system/kube-scheduler-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.332443    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="8297305e855c4c37329d28dfaf111542" pod="kube-system/kube-apiserver-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.372202    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="250f4c32a7ddbe3bb161b27a06859cf2" pod="kube-system/kube-controller-manager-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.384743    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-dvtdj\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="2b0a0e90-64b0-4df4-bf36-a524a30af1f2" pod="kube-system/kindnet-dvtdj"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.389913    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-r56tq\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="cc4772f3-712c-4c0e-8991-2e92a242e19c" pod="kube-system/kube-proxy-r56tq"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.397077    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="4f893e796c1ff75a6fa95936e877b240" pod="kube-system/etcd-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.399334    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="04361c61235cdebe210dd630178960d4" pod="kube-system/kube-scheduler-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.428393    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="8297305e855c4c37329d28dfaf111542" pod="kube-system/kube-apiserver-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.439830    1305 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-342238\" is forbidden: User \"system:node:pause-342238\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-342238' and this object" podUID="250f4c32a7ddbe3bb161b27a06859cf2" pod="kube-system/kube-controller-manager-pause-342238"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: I1109 14:32:32.783459    1305 scope.go:117] "RemoveContainer" containerID="4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	Nov 09 14:32:32 pause-342238 kubelet[1305]: E1109 14:32:32.783915    1305 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CrashLoopBackOff: \"back-off 10s restarting failed container=coredns pod=coredns-66bc5c9577-4vkj9_kube-system(e8dda2a5-805d-4a5c-904c-a8ff327f8180)\"" pod="kube-system/coredns-66bc5c9577-4vkj9" podUID="e8dda2a5-805d-4a5c-904c-a8ff327f8180"
	Nov 09 14:32:35 pause-342238 kubelet[1305]: W1109 14:32:35.636275    1305 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Nov 09 14:32:44 pause-342238 kubelet[1305]: I1109 14:32:44.456126    1305 scope.go:117] "RemoveContainer" containerID="4a1f936e07428b18d9c307997f095c8d73e649e7b520d8a35b53836606e5e79d"
	Nov 09 14:32:47 pause-342238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:32:47 pause-342238 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:32:47 pause-342238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-342238 -n pause-342238
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-342238 -n pause-342238: exit status 2 (485.572813ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-342238 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (8.06s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.616183ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:35:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-349599 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-349599 describe deploy/metrics-server -n kube-system: exit status 1 (94.119282ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-349599 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-349599
helpers_test.go:243: (dbg) docker inspect old-k8s-version-349599:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4",
	        "Created": "2025-11-09T14:34:44.509425898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 181370,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:34:44.572487334Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/hostname",
	        "HostsPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/hosts",
	        "LogPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4-json.log",
	        "Name": "/old-k8s-version-349599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-349599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-349599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4",
	                "LowerDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-349599",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-349599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-349599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-349599",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-349599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fed6f6686a2f60345bc766ce3bb7b045d57517c90f3429c256dbd8a46b278a1",
	            "SandboxKey": "/var/run/docker/netns/8fed6f6686a2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-349599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "86:1a:18:af:f4:81",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "30e3d4188e00f4421ef297f05815077467a901e69125366b2721a1705b0d17e1",
	                    "EndpointID": "0a64836f473d2742773fd161f65649cf7bc81a8b31286b3e1d5351d24e601da5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-349599",
	                        "05a48047eaa7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
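For reference, the host ports published in the NetworkSettings block above (for example 22/tcp -> 33045, used for SSH, and 8443/tcp -> 33048 for the API server) can be read back from the same inspect output with a Go template; this is the same lookup the minikube start log further below performs when it resolves the SSH port. A minimal sketch, assuming the old-k8s-version-349599 container still exists on the host:

	# print the host port mapped to the container's SSH port (33045 in the dump above)
	docker container inspect old-k8s-version-349599 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# the API server mapping (8443/tcp -> 33048 above) can be read the same way
	docker container inspect old-k8s-version-349599 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
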
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-349599 -n old-k8s-version-349599
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-349599 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-349599 logs -n 25: (1.233535528s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-241021 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo containerd config dump                                                                                                                                                                                                  │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo crio config                                                                                                                                                                                                             │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ delete  │ -p cilium-241021                                                                                                                                                                                                                              │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p force-systemd-env-413219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-413219  │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ ssh     │ force-systemd-flag-519664 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-519664 │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ delete  │ -p force-systemd-flag-519664                                                                                                                                                                                                                  │ force-systemd-flag-519664 │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-179822    │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p force-systemd-env-413219                                                                                                                                                                                                                   │ force-systemd-env-413219  │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p cert-options-276181 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ cert-options-276181 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ -p cert-options-276181 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p cert-options-276181                                                                                                                                                                                                                        │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:34:38
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:34:38.513439  180981 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:34:38.513557  180981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:34:38.513567  180981 out.go:374] Setting ErrFile to fd 2...
	I1109 14:34:38.513571  180981 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:34:38.513826  180981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:34:38.514241  180981 out.go:368] Setting JSON to false
	I1109 14:34:38.515083  180981 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4629,"bootTime":1762694250,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:34:38.515146  180981 start.go:143] virtualization:  
	I1109 14:34:38.520982  180981 out.go:179] * [old-k8s-version-349599] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:34:38.524430  180981 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:34:38.524505  180981 notify.go:221] Checking for updates...
	I1109 14:34:38.531496  180981 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:34:38.534683  180981 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:34:38.537917  180981 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:34:38.541348  180981 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:34:38.544634  180981 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:34:38.548234  180981 config.go:182] Loaded profile config "cert-expiration-179822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:34:38.548412  180981 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:34:38.582693  180981 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:34:38.582811  180981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:34:38.656810  180981 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:34:38.647008813 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:34:38.656924  180981 docker.go:319] overlay module found
	I1109 14:34:38.660107  180981 out.go:179] * Using the docker driver based on user configuration
	I1109 14:34:38.663094  180981 start.go:309] selected driver: docker
	I1109 14:34:38.663115  180981 start.go:930] validating driver "docker" against <nil>
	I1109 14:34:38.663131  180981 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:34:38.663944  180981 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:34:38.725516  180981 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:34:38.716412412 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:34:38.725680  180981 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:34:38.725907  180981 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:34:38.729184  180981 out.go:179] * Using Docker driver with root privileges
	I1109 14:34:38.732723  180981 cni.go:84] Creating CNI manager for ""
	I1109 14:34:38.732782  180981 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:34:38.732793  180981 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:34:38.732867  180981 start.go:353] cluster config:
	{Name:old-k8s-version-349599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-349599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SS
HAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:34:38.737817  180981 out.go:179] * Starting "old-k8s-version-349599" primary control-plane node in "old-k8s-version-349599" cluster
	I1109 14:34:38.740629  180981 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:34:38.743656  180981 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:34:38.746439  180981 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:34:38.746480  180981 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:34:38.746486  180981 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1109 14:34:38.746511  180981 cache.go:65] Caching tarball of preloaded images
	I1109 14:34:38.746596  180981 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:34:38.746606  180981 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1109 14:34:38.746713  180981 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/config.json ...
	I1109 14:34:38.746738  180981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/config.json: {Name:mkaf196392a074c18fb979705f0102f666cc160a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:34:38.765250  180981 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:34:38.765272  180981 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:34:38.765289  180981 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:34:38.765312  180981 start.go:360] acquireMachinesLock for old-k8s-version-349599: {Name:mkbe18cf125cfd8836d4bc86844e116f57958772 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:34:38.766143  180981 start.go:364] duration metric: took 807.832µs to acquireMachinesLock for "old-k8s-version-349599"
	I1109 14:34:38.766181  180981 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-349599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-349599 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQ
emuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:34:38.766261  180981 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:34:38.769696  180981 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:34:38.769929  180981 start.go:159] libmachine.API.Create for "old-k8s-version-349599" (driver="docker")
	I1109 14:34:38.769968  180981 client.go:173] LocalClient.Create starting
	I1109 14:34:38.770041  180981 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 14:34:38.770079  180981 main.go:143] libmachine: Decoding PEM data...
	I1109 14:34:38.770098  180981 main.go:143] libmachine: Parsing certificate...
	I1109 14:34:38.770150  180981 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 14:34:38.770173  180981 main.go:143] libmachine: Decoding PEM data...
	I1109 14:34:38.770183  180981 main.go:143] libmachine: Parsing certificate...
	I1109 14:34:38.770549  180981 cli_runner.go:164] Run: docker network inspect old-k8s-version-349599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:34:38.788041  180981 cli_runner.go:211] docker network inspect old-k8s-version-349599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:34:38.788130  180981 network_create.go:284] running [docker network inspect old-k8s-version-349599] to gather additional debugging logs...
	I1109 14:34:38.788161  180981 cli_runner.go:164] Run: docker network inspect old-k8s-version-349599
	W1109 14:34:38.804394  180981 cli_runner.go:211] docker network inspect old-k8s-version-349599 returned with exit code 1
	I1109 14:34:38.804426  180981 network_create.go:287] error running [docker network inspect old-k8s-version-349599]: docker network inspect old-k8s-version-349599: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-349599 not found
	I1109 14:34:38.804441  180981 network_create.go:289] output of [docker network inspect old-k8s-version-349599]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-349599 not found
	
	** /stderr **
	I1109 14:34:38.804543  180981 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:34:38.822288  180981 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b901b8dcb821 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:01:f6:7f:4e:91} reservation:<nil>}
	I1109 14:34:38.822770  180981 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-46dda1eda2df IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:a9:4d:4f:8f:31} reservation:<nil>}
	I1109 14:34:38.823206  180981 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3b44df0b0b1c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:80:ac:56:fe:3d} reservation:<nil>}
	I1109 14:34:38.823431  180981 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-19fe94425a30 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:51:69:db:63:61} reservation:<nil>}
	I1109 14:34:38.823814  180981 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019e2dc0}
	I1109 14:34:38.823838  180981 network_create.go:124] attempt to create docker network old-k8s-version-349599 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1109 14:34:38.823944  180981 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-349599 old-k8s-version-349599
	I1109 14:34:38.885608  180981 network_create.go:108] docker network old-k8s-version-349599 192.168.85.0/24 created
	I1109 14:34:38.885645  180981 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-349599" container
	I1109 14:34:38.885716  180981 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:34:38.904755  180981 cli_runner.go:164] Run: docker volume create old-k8s-version-349599 --label name.minikube.sigs.k8s.io=old-k8s-version-349599 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:34:38.922043  180981 oci.go:103] Successfully created a docker volume old-k8s-version-349599
	I1109 14:34:38.922129  180981 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-349599-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-349599 --entrypoint /usr/bin/test -v old-k8s-version-349599:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:34:39.457756  180981 oci.go:107] Successfully prepared a docker volume old-k8s-version-349599
	I1109 14:34:39.457842  180981 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:34:39.457856  180981 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:34:39.457929  180981 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-349599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:34:44.444119  180981 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-349599:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.986147806s)
	I1109 14:34:44.444154  180981 kic.go:203] duration metric: took 4.986293325s to extract preloaded images to volume ...
	W1109 14:34:44.444312  180981 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 14:34:44.444433  180981 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:34:44.494796  180981 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-349599 --name old-k8s-version-349599 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-349599 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-349599 --network old-k8s-version-349599 --ip 192.168.85.2 --volume old-k8s-version-349599:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:34:44.828652  180981 cli_runner.go:164] Run: docker container inspect old-k8s-version-349599 --format={{.State.Running}}
	I1109 14:34:44.852159  180981 cli_runner.go:164] Run: docker container inspect old-k8s-version-349599 --format={{.State.Status}}
	I1109 14:34:44.875469  180981 cli_runner.go:164] Run: docker exec old-k8s-version-349599 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:34:44.927442  180981 oci.go:144] the created container "old-k8s-version-349599" has a running status.
	I1109 14:34:44.927469  180981 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa...
	I1109 14:34:45.915917  180981 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:34:45.939390  180981 cli_runner.go:164] Run: docker container inspect old-k8s-version-349599 --format={{.State.Status}}
	I1109 14:34:45.958626  180981 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:34:45.958651  180981 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-349599 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:34:46.002089  180981 cli_runner.go:164] Run: docker container inspect old-k8s-version-349599 --format={{.State.Status}}
	I1109 14:34:46.021627  180981 machine.go:94] provisionDockerMachine start ...
	I1109 14:34:46.021733  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:46.041418  180981 main.go:143] libmachine: Using SSH client type: native
	I1109 14:34:46.041764  180981 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:34:46.041780  180981 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:34:46.042423  180981 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:34:49.195731  180981 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-349599
	
	I1109 14:34:49.195755  180981 ubuntu.go:182] provisioning hostname "old-k8s-version-349599"
	I1109 14:34:49.195832  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:49.214097  180981 main.go:143] libmachine: Using SSH client type: native
	I1109 14:34:49.214411  180981 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:34:49.214430  180981 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-349599 && echo "old-k8s-version-349599" | sudo tee /etc/hostname
	I1109 14:34:49.385332  180981 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-349599
	
	I1109 14:34:49.385416  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:49.402794  180981 main.go:143] libmachine: Using SSH client type: native
	I1109 14:34:49.403247  180981 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:34:49.403273  180981 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-349599' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-349599/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-349599' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:34:49.560302  180981 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:34:49.560379  180981 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:34:49.560439  180981 ubuntu.go:190] setting up certificates
	I1109 14:34:49.560453  180981 provision.go:84] configureAuth start
	I1109 14:34:49.560525  180981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-349599
	I1109 14:34:49.577951  180981 provision.go:143] copyHostCerts
	I1109 14:34:49.578018  180981 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:34:49.578032  180981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:34:49.578113  180981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:34:49.578219  180981 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:34:49.578230  180981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:34:49.578258  180981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:34:49.578321  180981 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:34:49.578329  180981 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:34:49.578352  180981 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:34:49.578418  180981 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-349599 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-349599]
	I1109 14:34:50.241967  180981 provision.go:177] copyRemoteCerts
	I1109 14:34:50.242031  180981 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:34:50.242071  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:50.259612  180981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa Username:docker}
	I1109 14:34:50.367562  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:34:50.385334  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:34:50.402952  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:34:50.421982  180981 provision.go:87] duration metric: took 861.515963ms to configureAuth
	I1109 14:34:50.422009  180981 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:34:50.422195  180981 config.go:182] Loaded profile config "old-k8s-version-349599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:34:50.422304  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:50.439280  180981 main.go:143] libmachine: Using SSH client type: native
	I1109 14:34:50.439599  180981 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33045 <nil> <nil>}
	I1109 14:34:50.439620  180981 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:34:50.698359  180981 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:34:50.698387  180981 machine.go:97] duration metric: took 4.676729049s to provisionDockerMachine
	I1109 14:34:50.698405  180981 client.go:176] duration metric: took 11.928419299s to LocalClient.Create
	I1109 14:34:50.698422  180981 start.go:167] duration metric: took 11.928494639s to libmachine.API.Create "old-k8s-version-349599"
	I1109 14:34:50.698432  180981 start.go:293] postStartSetup for "old-k8s-version-349599" (driver="docker")
	I1109 14:34:50.698451  180981 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:34:50.698526  180981 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:34:50.698578  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:50.716160  180981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa Username:docker}
	I1109 14:34:50.819959  180981 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:34:50.823335  180981 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:34:50.823364  180981 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:34:50.823376  180981 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:34:50.823436  180981 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:34:50.823527  180981 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:34:50.823637  180981 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:34:50.831022  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:34:50.848150  180981 start.go:296] duration metric: took 149.694264ms for postStartSetup
	I1109 14:34:50.848512  180981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-349599
	I1109 14:34:50.865059  180981 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/config.json ...
	I1109 14:34:50.865343  180981 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:34:50.865392  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:50.881456  180981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa Username:docker}
	I1109 14:34:50.988659  180981 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:34:50.993287  180981 start.go:128] duration metric: took 12.227011021s to createHost
	I1109 14:34:50.993315  180981 start.go:83] releasing machines lock for "old-k8s-version-349599", held for 12.227154038s
	I1109 14:34:50.993384  180981 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-349599
	I1109 14:34:51.009985  180981 ssh_runner.go:195] Run: cat /version.json
	I1109 14:34:51.010039  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:51.010289  180981 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:34:51.010355  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:34:51.034998  180981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa Username:docker}
	I1109 14:34:51.046869  180981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa Username:docker}
	I1109 14:34:51.147815  180981 ssh_runner.go:195] Run: systemctl --version
	I1109 14:34:51.239684  180981 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:34:51.275787  180981 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:34:51.281639  180981 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:34:51.281718  180981 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:34:51.316653  180981 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 14:34:51.316677  180981 start.go:496] detecting cgroup driver to use...
	I1109 14:34:51.316710  180981 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:34:51.316759  180981 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:34:51.336295  180981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:34:51.349782  180981 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:34:51.349849  180981 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:34:51.368177  180981 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:34:51.388070  180981 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:34:51.513976  180981 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:34:51.641669  180981 docker.go:234] disabling docker service ...
	I1109 14:34:51.641758  180981 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:34:51.662283  180981 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:34:51.676469  180981 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:34:51.819307  180981 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:34:51.950493  180981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:34:51.963857  180981 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:34:51.978397  180981 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1109 14:34:51.978462  180981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:34:51.987468  180981 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:34:51.987534  180981 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:34:51.998044  180981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:34:52.006340  180981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:34:52.021275  180981 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:34:52.036697  180981 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:34:52.046596  180981 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:34:52.061805  180981 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:34:52.071471  180981 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:34:52.079454  180981 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:34:52.087458  180981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:34:52.190859  180981 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:34:52.335410  180981 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:34:52.335476  180981 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:34:52.339901  180981 start.go:564] Will wait 60s for crictl version
	I1109 14:34:52.340024  180981 ssh_runner.go:195] Run: which crictl
	I1109 14:34:52.343832  180981 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:34:52.374042  180981 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:34:52.374209  180981 ssh_runner.go:195] Run: crio --version
	I1109 14:34:52.403045  180981 ssh_runner.go:195] Run: crio --version
	I1109 14:34:52.438663  180981 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1109 14:34:52.441734  180981 cli_runner.go:164] Run: docker network inspect old-k8s-version-349599 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:34:52.458231  180981 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:34:52.462133  180981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:34:52.472315  180981 kubeadm.go:884] updating cluster {Name:old-k8s-version-349599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-349599 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:34:52.472440  180981 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 14:34:52.472502  180981 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:34:52.503147  180981 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:34:52.503174  180981 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:34:52.503231  180981 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:34:52.532763  180981 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:34:52.532790  180981 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:34:52.532799  180981 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1109 14:34:52.532886  180981 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-349599 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-349599 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
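
The kubelet drop-in above is rendered from the cluster config and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A minimal rendering sketch with text/template; the struct fields and the reduced flag set here are illustrative, not minikube's actual template:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // kubeletUnit is a simplified stand-in for the values minikube templates
    // into the kubelet drop-in; the real unit carries more flags.
    type kubeletUnit struct {
    	BinDir, NodeName, NodeIP string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --config=/var/lib/kubelet/config.yaml

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	u := kubeletUnit{
    		BinDir:   "/var/lib/minikube/binaries/v1.28.0",
    		NodeName: "old-k8s-version-349599",
    		NodeIP:   "192.168.85.2",
    	}
    	if err := t.Execute(os.Stdout, u); err != nil {
    		log.Fatal(err)
    	}
    }
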
	I1109 14:34:52.532983  180981 ssh_runner.go:195] Run: crio config
	I1109 14:34:52.597160  180981 cni.go:84] Creating CNI manager for ""
	I1109 14:34:52.597185  180981 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:34:52.597225  180981 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:34:52.597254  180981 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-349599 NodeName:old-k8s-version-349599 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:34:52.597401  180981 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-349599"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:34:52.597475  180981 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1109 14:34:52.605385  180981 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:34:52.605457  180981 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:34:52.613074  180981 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1109 14:34:52.627966  180981 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:34:52.642532  180981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
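
That generated config is what the 2160-byte scp just above writes to /var/tmp/minikube/kubeadm.yaml.new. A small sanity-check sketch that reads a couple of fields back out of the first YAML document with gopkg.in/yaml.v3 (the only non-stdlib dependency; file path and field choice are illustrative):

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // initConfig models just the fields we want to spot-check from the first
    // YAML document (the InitConfiguration) in the generated kubeadm config.
    type initConfig struct {
    	LocalAPIEndpoint struct {
    		AdvertiseAddress string `yaml:"advertiseAddress"`
    		BindPort         int    `yaml:"bindPort"`
    	} `yaml:"localAPIEndpoint"`
    	NodeRegistration struct {
    		CRISocket string `yaml:"criSocket"`
    		Name      string `yaml:"name"`
    	} `yaml:"nodeRegistration"`
    }

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var cfg initConfig
    	// yaml.Unmarshal decodes only the first document of a multi-document
    	// file, which in this layout is the InitConfiguration.
    	if err := yaml.Unmarshal(data, &cfg); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("api endpoint: %s:%d\n", cfg.LocalAPIEndpoint.AdvertiseAddress, cfg.LocalAPIEndpoint.BindPort)
    	fmt.Printf("node %q via %s\n", cfg.NodeRegistration.Name, cfg.NodeRegistration.CRISocket)
    }
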
	I1109 14:34:52.655708  180981 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:34:52.659208  180981 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:34:52.669135  180981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:34:52.780050  180981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:34:52.798429  180981 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599 for IP: 192.168.85.2
	I1109 14:34:52.798448  180981 certs.go:195] generating shared ca certs ...
	I1109 14:34:52.798465  180981 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:34:52.798601  180981 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:34:52.798645  180981 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:34:52.798659  180981 certs.go:257] generating profile certs ...
	I1109 14:34:52.798714  180981 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.key
	I1109 14:34:52.798731  180981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt with IP's: []
	I1109 14:34:53.441514  180981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt ...
	I1109 14:34:53.441545  180981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: {Name:mk126fcc17541c066e5baaf12a0d160627f82587 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:34:53.441737  180981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.key ...
	I1109 14:34:53.441754  180981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.key: {Name:mk422b53032a57287396b1c99950288fd99f996c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:34:53.441851  180981 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.key.7d031c72
	I1109 14:34:53.441873  180981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.crt.7d031c72 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1109 14:34:53.634393  180981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.crt.7d031c72 ...
	I1109 14:34:53.634423  180981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.crt.7d031c72: {Name:mk96571b42616064c11e1917dcd3dc615cda6b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:34:53.634602  180981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.key.7d031c72 ...
	I1109 14:34:53.634615  180981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.key.7d031c72: {Name:mk42d3fe5e5f4a19ee39429fd92dbd696f8020f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:34:53.634698  180981 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.crt.7d031c72 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.crt
	I1109 14:34:53.634785  180981 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.key.7d031c72 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.key
	I1109 14:34:53.634849  180981 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/proxy-client.key
	I1109 14:34:53.634868  180981 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/proxy-client.crt with IP's: []
	I1109 14:34:55.138122  180981 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/proxy-client.crt ...
	I1109 14:34:55.138151  180981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/proxy-client.crt: {Name:mk29e8af7e6ccad36f7493c1b25c4d046799fee3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:34:55.143686  180981 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/proxy-client.key ...
	I1109 14:34:55.143724  180981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/proxy-client.key: {Name:mk8b041b05e25f447457519582fa2e2013871e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
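
The profile certificates above are generated on the host and signed by the shared minikube CA before being copied to the node. A compact sketch of the same pattern with crypto/x509: issue a serving certificate for the API-server IPs listed in the log. The CA here is created on the fly purely for illustration; minikube reuses the cached ca.crt/ca.key under .minikube/:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (minikube loads the existing ca.crt/ca.key instead).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		log.Fatal(err)
    	}

    	// API-server serving cert, signed for the same IPs the log reports;
    	// 3 years matches the profile's CertExpiration of 26280h.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
    		},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
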
	I1109 14:34:55.143975  180981 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:34:55.144021  180981 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:34:55.144032  180981 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:34:55.144056  180981 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:34:55.144087  180981 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:34:55.144110  180981 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:34:55.144151  180981 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:34:55.144687  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:34:55.165749  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:34:55.189121  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:34:55.208833  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:34:55.229678  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1109 14:34:55.249595  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:34:55.268894  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:34:55.287197  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:34:55.307364  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:34:55.331550  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:34:55.351555  180981 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:34:55.370829  180981 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:34:55.383608  180981 ssh_runner.go:195] Run: openssl version
	I1109 14:34:55.389928  180981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:34:55.398686  180981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:34:55.402597  180981 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:34:55.402674  180981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:34:55.445629  180981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:34:55.454928  180981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:34:55.463614  180981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:34:55.467848  180981 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:34:55.467944  180981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:34:55.510189  180981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:34:55.518822  180981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:34:55.527580  180981 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:34:55.531504  180981 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:34:55.531566  180981 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:34:55.573098  180981 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
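
The openssl/ln sequence above links each CA file under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch of that step driven from Go, reusing the openssl binary for the hash exactly as the log does; paths are taken from the log and the program must run as root:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash symlinks certPath into /etc/ssl/certs under the name
    // <subject-hash>.0, the convention OpenSSL uses to locate trusted CAs.
    func linkBySubjectHash(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		log.Fatal(err)
    	}
    }
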
	I1109 14:34:55.581793  180981 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:34:55.585407  180981 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:34:55.585460  180981 kubeadm.go:401] StartCluster: {Name:old-k8s-version-349599 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-349599 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:34:55.585534  180981 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:34:55.585595  180981 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:34:55.617076  180981 cri.go:89] found id: ""
	I1109 14:34:55.617154  180981 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:34:55.625002  180981 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:34:55.633062  180981 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:34:55.633168  180981 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:34:55.641376  180981 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:34:55.641396  180981 kubeadm.go:158] found existing configuration files:
	
	I1109 14:34:55.641475  180981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:34:55.649372  180981 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:34:55.649439  180981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:34:55.656966  180981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:34:55.664618  180981 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:34:55.664703  180981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:34:55.672161  180981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:34:55.680230  180981 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:34:55.680298  180981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:34:55.688184  180981 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:34:55.696172  180981 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:34:55.696261  180981 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
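
Each of the four kubeconfig files above is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise, including the file-not-found case seen on this first start, it is removed so kubeadm regenerates it. The same check, sketched with the standard library:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // already targets the expected endpoint, keep it
    		}
    		// Missing or pointing elsewhere: remove it so kubeadm rewrites it.
    		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
    			log.Printf("removing %s: %v", f, err)
    		}
    	}
    }
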
	I1109 14:34:55.704027  180981 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:34:55.804447  180981 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 14:34:55.888809  180981 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:35:11.839804  180981 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1109 14:35:11.839862  180981 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:35:11.839993  180981 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:35:11.840066  180981 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 14:35:11.840111  180981 kubeadm.go:319] OS: Linux
	I1109 14:35:11.840163  180981 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:35:11.840217  180981 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 14:35:11.840270  180981 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:35:11.840324  180981 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:35:11.840377  180981 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:35:11.840433  180981 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:35:11.840484  180981 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:35:11.840538  180981 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:35:11.840590  180981 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 14:35:11.840667  180981 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:35:11.840769  180981 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:35:11.840869  180981 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 14:35:11.840944  180981 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:35:11.844167  180981 out.go:252]   - Generating certificates and keys ...
	I1109 14:35:11.844269  180981 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:35:11.844359  180981 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:35:11.844442  180981 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:35:11.844507  180981 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:35:11.844577  180981 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:35:11.844634  180981 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:35:11.844691  180981 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:35:11.844822  180981 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-349599] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:35:11.844892  180981 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:35:11.845023  180981 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-349599] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:35:11.845091  180981 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:35:11.845157  180981 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:35:11.845205  180981 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:35:11.845263  180981 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:35:11.845316  180981 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:35:11.845372  180981 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:35:11.845444  180981 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:35:11.845501  180981 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:35:11.845586  180981 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:35:11.845655  180981 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:35:11.848738  180981 out.go:252]   - Booting up control plane ...
	I1109 14:35:11.848918  180981 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:35:11.849009  180981 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:35:11.849084  180981 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:35:11.849208  180981 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:35:11.849303  180981 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:35:11.849347  180981 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:35:11.849520  180981 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 14:35:11.849606  180981 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.002074 seconds
	I1109 14:35:11.849725  180981 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:35:11.849865  180981 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:35:11.849930  180981 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:35:11.850148  180981 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-349599 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:35:11.850219  180981 kubeadm.go:319] [bootstrap-token] Using token: 6xaaa7.lny141wqe4r2gekt
	I1109 14:35:11.853044  180981 out.go:252]   - Configuring RBAC rules ...
	I1109 14:35:11.853172  180981 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:35:11.853286  180981 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:35:11.853434  180981 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:35:11.853570  180981 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:35:11.853691  180981 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:35:11.853796  180981 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:35:11.853916  180981 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:35:11.853966  180981 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:35:11.854014  180981 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:35:11.854018  180981 kubeadm.go:319] 
	I1109 14:35:11.854081  180981 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:35:11.854085  180981 kubeadm.go:319] 
	I1109 14:35:11.854165  180981 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:35:11.854169  180981 kubeadm.go:319] 
	I1109 14:35:11.854196  180981 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:35:11.854258  180981 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:35:11.854311  180981 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:35:11.854315  180981 kubeadm.go:319] 
	I1109 14:35:11.854371  180981 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:35:11.854375  180981 kubeadm.go:319] 
	I1109 14:35:11.854424  180981 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:35:11.854428  180981 kubeadm.go:319] 
	I1109 14:35:11.854483  180981 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:35:11.854561  180981 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:35:11.854632  180981 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:35:11.854636  180981 kubeadm.go:319] 
	I1109 14:35:11.854724  180981 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:35:11.854804  180981 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:35:11.854808  180981 kubeadm.go:319] 
	I1109 14:35:11.854895  180981 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6xaaa7.lny141wqe4r2gekt \
	I1109 14:35:11.855010  180981 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 14:35:11.855036  180981 kubeadm.go:319] 	--control-plane 
	I1109 14:35:11.855040  180981 kubeadm.go:319] 
	I1109 14:35:11.855133  180981 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:35:11.855138  180981 kubeadm.go:319] 
	I1109 14:35:11.855223  180981 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6xaaa7.lny141wqe4r2gekt \
	I1109 14:35:11.855342  180981 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 14:35:11.855350  180981 cni.go:84] Creating CNI manager for ""
	I1109 14:35:11.855357  180981 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:35:11.858489  180981 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:35:11.861410  180981 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:35:11.866594  180981 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1109 14:35:11.866612  180981 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:35:11.889922  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:35:12.863235  180981 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:35:12.863387  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:12.863466  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-349599 minikube.k8s.io/updated_at=2025_11_09T14_35_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=old-k8s-version-349599 minikube.k8s.io/primary=true
	I1109 14:35:13.005499  180981 ops.go:34] apiserver oom_adj: -16
	I1109 14:35:13.005608  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:13.505708  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:14.006567  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:14.506395  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:15.005780  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:15.505754  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:16.005706  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:16.506300  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:17.005988  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:17.506086  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:18.006074  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:18.505989  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:19.005772  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:19.506305  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:20.005839  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:20.505941  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:21.006110  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:21.506635  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:22.005805  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:22.505777  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:23.005732  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:23.506484  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:24.006200  180981 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:35:24.159002  180981 kubeadm.go:1114] duration metric: took 11.295656215s to wait for elevateKubeSystemPrivileges
	I1109 14:35:24.159034  180981 kubeadm.go:403] duration metric: took 28.573577289s to StartCluster
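
The repeated "kubectl get sa default" calls above are a poll loop: the minikube-rbac cluster role binding only takes effect once the default service account exists. A plain polling sketch of that wait; the kubectl path comes from the log, while the 4-minute cap and 500ms interval are assumptions:

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
    	deadline := time.Now().Add(4 * time.Minute)
    	for {
    		// Succeeds only once the "default" service account has been created.
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			log.Println("default service account is present")
    			return
    		}
    		if time.Now().After(deadline) {
    			log.Fatalf("timed out waiting for default service account: %v", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
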
	I1109 14:35:24.159051  180981 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:35:24.159119  180981 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:35:24.163051  180981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:35:24.163335  180981 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:35:24.164073  180981 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:35:24.164588  180981 config.go:182] Loaded profile config "old-k8s-version-349599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:35:24.164638  180981 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:35:24.164769  180981 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-349599"
	I1109 14:35:24.164784  180981 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-349599"
	I1109 14:35:24.164815  180981 host.go:66] Checking if "old-k8s-version-349599" exists ...
	I1109 14:35:24.165382  180981 cli_runner.go:164] Run: docker container inspect old-k8s-version-349599 --format={{.State.Status}}
	I1109 14:35:24.169435  180981 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-349599"
	I1109 14:35:24.171841  180981 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-349599"
	I1109 14:35:24.180846  180981 out.go:179] * Verifying Kubernetes components...
	I1109 14:35:24.181414  180981 cli_runner.go:164] Run: docker container inspect old-k8s-version-349599 --format={{.State.Status}}
	I1109 14:35:24.183788  180981 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:35:24.194940  180981 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:35:24.198002  180981 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:35:24.198022  180981 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:35:24.198085  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:35:24.224103  180981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa Username:docker}
	I1109 14:35:24.248624  180981 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-349599"
	I1109 14:35:24.248664  180981 host.go:66] Checking if "old-k8s-version-349599" exists ...
	I1109 14:35:24.249095  180981 cli_runner.go:164] Run: docker container inspect old-k8s-version-349599 --format={{.State.Status}}
	I1109 14:35:24.281879  180981 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:35:24.281900  180981 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:35:24.281962  180981 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:35:24.311432  180981 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33045 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa Username:docker}
	I1109 14:35:24.559335  180981 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:35:24.569018  180981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:35:24.599723  180981 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:35:24.604062  180981 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:35:25.572700  180981 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.013288976s)
	I1109 14:35:25.572776  180981 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1109 14:35:25.573268  180981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.004176611s)
	I1109 14:35:25.574208  180981 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-349599" to be "Ready" ...
	I1109 14:35:25.871743  180981 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.267595196s)
	I1109 14:35:25.876755  180981 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1109 14:35:25.879710  180981 addons.go:515] duration metric: took 1.715048831s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1109 14:35:26.080255  180981 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-349599" context rescaled to 1 replicas
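
The CoreDNS edit above pipes the coredns ConfigMap through sed to insert a hosts stanza that resolves host.minikube.internal, then replaces the ConfigMap and rescales the deployment to one replica. A string-level sketch of just the insertion step; the sample Corefile is heavily simplified and not the one shipped with CoreDNS:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostsBlock inserts a "hosts" stanza before the forward directive,
    // mirroring what the sed pipeline does to the coredns ConfigMap.
    func injectHostsBlock(corefile, ip, name string) string {
    	block := fmt.Sprintf("    hosts {\n        %s %s\n        fallthrough\n    }\n", ip, name)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			b.WriteString(block)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n    errors\n    health\n    forward . /etc/resolv.conf\n    cache 30\n}\n"
    	fmt.Print(injectHostsBlock(corefile, "192.168.85.1", "host.minikube.internal"))
    }
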
	W1109 14:35:27.579527  180981 node_ready.go:57] node "old-k8s-version-349599" has "Ready":"False" status (will retry)
	W1109 14:35:30.087202  180981 node_ready.go:57] node "old-k8s-version-349599" has "Ready":"False" status (will retry)
	W1109 14:35:32.577902  180981 node_ready.go:57] node "old-k8s-version-349599" has "Ready":"False" status (will retry)
	W1109 14:35:35.078478  180981 node_ready.go:57] node "old-k8s-version-349599" has "Ready":"False" status (will retry)
	W1109 14:35:37.078521  180981 node_ready.go:57] node "old-k8s-version-349599" has "Ready":"False" status (will retry)
	I1109 14:35:38.578693  180981 node_ready.go:49] node "old-k8s-version-349599" is "Ready"
	I1109 14:35:38.578726  180981 node_ready.go:38] duration metric: took 13.003918578s for node "old-k8s-version-349599" to be "Ready" ...
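
The Ready wait above retries until the node reports its Ready condition as True. One way to express the same check without client-go is to poll the node's conditions through kubectl's jsonpath output; the node name and kubectl path come from the log, the retry budget is an assumption:

    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.28.0/kubectl"
    	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
    	for i := 0; i < 120; i++ {
    		out, err := exec.Command("sudo", kubectl, "get", "node", "old-k8s-version-349599",
    			"--kubeconfig=/var/lib/minikube/kubeconfig",
    			"-o", "jsonpath="+jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			log.Println("node is Ready")
    			return
    		}
    		time.Sleep(3 * time.Second)
    	}
    	log.Fatal("node never became Ready")
    }
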
	I1109 14:35:38.578740  180981 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:35:38.578799  180981 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:35:38.590387  180981 api_server.go:72] duration metric: took 14.427009118s to wait for apiserver process to appear ...
	I1109 14:35:38.590412  180981 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:35:38.590434  180981 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:35:38.599763  180981 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1109 14:35:38.601075  180981 api_server.go:141] control plane version: v1.28.0
	I1109 14:35:38.601101  180981 api_server.go:131] duration metric: took 10.681335ms to wait for apiserver health ...
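
The health gate above is a plain HTTPS GET against /healthz on the API server, with a 200 response and body "ok" counting as healthy. A minimal client sketch; certificate verification is skipped here for brevity, whereas the real check would trust the cluster CA from .minikube/ca.crt:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Illustration only: a real check loads the cluster CA
    			// instead of skipping verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.85.2:8443/healthz")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body)
    }
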
	I1109 14:35:38.601110  180981 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:35:38.604518  180981 system_pods.go:59] 8 kube-system pods found
	I1109 14:35:38.604551  180981 system_pods.go:61] "coredns-5dd5756b68-2z64q" [e47a91d2-3b07-42d3-9893-b2773590c8e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:35:38.604558  180981 system_pods.go:61] "etcd-old-k8s-version-349599" [d0974f47-3ec3-45a5-a9ec-c0d1128cc3d2] Running
	I1109 14:35:38.604563  180981 system_pods.go:61] "kindnet-2r8mz" [2e794a4e-c0c0-4759-9087-e80451139f25] Running
	I1109 14:35:38.604568  180981 system_pods.go:61] "kube-apiserver-old-k8s-version-349599" [ca1366d1-7313-4e2b-aeb4-85231f53b9b7] Running
	I1109 14:35:38.604573  180981 system_pods.go:61] "kube-controller-manager-old-k8s-version-349599" [6640eb75-c05e-4f41-b535-4b43ee2f99b8] Running
	I1109 14:35:38.604578  180981 system_pods.go:61] "kube-proxy-tcp6s" [2ce7f7de-8607-4c51-a80b-792af4b7c036] Running
	I1109 14:35:38.604583  180981 system_pods.go:61] "kube-scheduler-old-k8s-version-349599" [4e2bdf11-00ad-471d-9d1f-415178b82eba] Running
	I1109 14:35:38.604595  180981 system_pods.go:61] "storage-provisioner" [4bf52fd0-d955-4414-a7d8-bbc6576c0a2e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:35:38.604602  180981 system_pods.go:74] duration metric: took 3.485969ms to wait for pod list to return data ...
	I1109 14:35:38.604616  180981 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:35:38.606810  180981 default_sa.go:45] found service account: "default"
	I1109 14:35:38.606836  180981 default_sa.go:55] duration metric: took 2.214421ms for default service account to be created ...
	I1109 14:35:38.606846  180981 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:35:38.610395  180981 system_pods.go:86] 8 kube-system pods found
	I1109 14:35:38.610430  180981 system_pods.go:89] "coredns-5dd5756b68-2z64q" [e47a91d2-3b07-42d3-9893-b2773590c8e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:35:38.610438  180981 system_pods.go:89] "etcd-old-k8s-version-349599" [d0974f47-3ec3-45a5-a9ec-c0d1128cc3d2] Running
	I1109 14:35:38.610445  180981 system_pods.go:89] "kindnet-2r8mz" [2e794a4e-c0c0-4759-9087-e80451139f25] Running
	I1109 14:35:38.610450  180981 system_pods.go:89] "kube-apiserver-old-k8s-version-349599" [ca1366d1-7313-4e2b-aeb4-85231f53b9b7] Running
	I1109 14:35:38.610455  180981 system_pods.go:89] "kube-controller-manager-old-k8s-version-349599" [6640eb75-c05e-4f41-b535-4b43ee2f99b8] Running
	I1109 14:35:38.610459  180981 system_pods.go:89] "kube-proxy-tcp6s" [2ce7f7de-8607-4c51-a80b-792af4b7c036] Running
	I1109 14:35:38.610463  180981 system_pods.go:89] "kube-scheduler-old-k8s-version-349599" [4e2bdf11-00ad-471d-9d1f-415178b82eba] Running
	I1109 14:35:38.610470  180981 system_pods.go:89] "storage-provisioner" [4bf52fd0-d955-4414-a7d8-bbc6576c0a2e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:35:38.610492  180981 retry.go:31] will retry after 251.110722ms: missing components: kube-dns
	I1109 14:35:38.866150  180981 system_pods.go:86] 8 kube-system pods found
	I1109 14:35:38.866186  180981 system_pods.go:89] "coredns-5dd5756b68-2z64q" [e47a91d2-3b07-42d3-9893-b2773590c8e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:35:38.866193  180981 system_pods.go:89] "etcd-old-k8s-version-349599" [d0974f47-3ec3-45a5-a9ec-c0d1128cc3d2] Running
	I1109 14:35:38.866200  180981 system_pods.go:89] "kindnet-2r8mz" [2e794a4e-c0c0-4759-9087-e80451139f25] Running
	I1109 14:35:38.866204  180981 system_pods.go:89] "kube-apiserver-old-k8s-version-349599" [ca1366d1-7313-4e2b-aeb4-85231f53b9b7] Running
	I1109 14:35:38.866209  180981 system_pods.go:89] "kube-controller-manager-old-k8s-version-349599" [6640eb75-c05e-4f41-b535-4b43ee2f99b8] Running
	I1109 14:35:38.866214  180981 system_pods.go:89] "kube-proxy-tcp6s" [2ce7f7de-8607-4c51-a80b-792af4b7c036] Running
	I1109 14:35:38.866218  180981 system_pods.go:89] "kube-scheduler-old-k8s-version-349599" [4e2bdf11-00ad-471d-9d1f-415178b82eba] Running
	I1109 14:35:38.866223  180981 system_pods.go:89] "storage-provisioner" [4bf52fd0-d955-4414-a7d8-bbc6576c0a2e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:35:38.866242  180981 retry.go:31] will retry after 249.341741ms: missing components: kube-dns
	I1109 14:35:39.120187  180981 system_pods.go:86] 8 kube-system pods found
	I1109 14:35:39.120223  180981 system_pods.go:89] "coredns-5dd5756b68-2z64q" [e47a91d2-3b07-42d3-9893-b2773590c8e4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:35:39.120230  180981 system_pods.go:89] "etcd-old-k8s-version-349599" [d0974f47-3ec3-45a5-a9ec-c0d1128cc3d2] Running
	I1109 14:35:39.120236  180981 system_pods.go:89] "kindnet-2r8mz" [2e794a4e-c0c0-4759-9087-e80451139f25] Running
	I1109 14:35:39.120240  180981 system_pods.go:89] "kube-apiserver-old-k8s-version-349599" [ca1366d1-7313-4e2b-aeb4-85231f53b9b7] Running
	I1109 14:35:39.120245  180981 system_pods.go:89] "kube-controller-manager-old-k8s-version-349599" [6640eb75-c05e-4f41-b535-4b43ee2f99b8] Running
	I1109 14:35:39.120249  180981 system_pods.go:89] "kube-proxy-tcp6s" [2ce7f7de-8607-4c51-a80b-792af4b7c036] Running
	I1109 14:35:39.120253  180981 system_pods.go:89] "kube-scheduler-old-k8s-version-349599" [4e2bdf11-00ad-471d-9d1f-415178b82eba] Running
	I1109 14:35:39.120260  180981 system_pods.go:89] "storage-provisioner" [4bf52fd0-d955-4414-a7d8-bbc6576c0a2e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:35:39.120280  180981 retry.go:31] will retry after 408.565922ms: missing components: kube-dns
	I1109 14:35:39.533597  180981 system_pods.go:86] 8 kube-system pods found
	I1109 14:35:39.533628  180981 system_pods.go:89] "coredns-5dd5756b68-2z64q" [e47a91d2-3b07-42d3-9893-b2773590c8e4] Running
	I1109 14:35:39.533635  180981 system_pods.go:89] "etcd-old-k8s-version-349599" [d0974f47-3ec3-45a5-a9ec-c0d1128cc3d2] Running
	I1109 14:35:39.533639  180981 system_pods.go:89] "kindnet-2r8mz" [2e794a4e-c0c0-4759-9087-e80451139f25] Running
	I1109 14:35:39.533643  180981 system_pods.go:89] "kube-apiserver-old-k8s-version-349599" [ca1366d1-7313-4e2b-aeb4-85231f53b9b7] Running
	I1109 14:35:39.533648  180981 system_pods.go:89] "kube-controller-manager-old-k8s-version-349599" [6640eb75-c05e-4f41-b535-4b43ee2f99b8] Running
	I1109 14:35:39.533652  180981 system_pods.go:89] "kube-proxy-tcp6s" [2ce7f7de-8607-4c51-a80b-792af4b7c036] Running
	I1109 14:35:39.533656  180981 system_pods.go:89] "kube-scheduler-old-k8s-version-349599" [4e2bdf11-00ad-471d-9d1f-415178b82eba] Running
	I1109 14:35:39.533659  180981 system_pods.go:89] "storage-provisioner" [4bf52fd0-d955-4414-a7d8-bbc6576c0a2e] Running
	I1109 14:35:39.533667  180981 system_pods.go:126] duration metric: took 926.815386ms to wait for k8s-apps to be running ...
	I1109 14:35:39.533675  180981 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:35:39.533731  180981 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:35:39.547808  180981 system_svc.go:56] duration metric: took 14.124408ms WaitForService to wait for kubelet
	I1109 14:35:39.547836  180981 kubeadm.go:587] duration metric: took 15.384461322s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:35:39.547854  180981 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:35:39.550825  180981 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:35:39.550856  180981 node_conditions.go:123] node cpu capacity is 2
	I1109 14:35:39.550869  180981 node_conditions.go:105] duration metric: took 3.009806ms to run NodePressure ...
	I1109 14:35:39.550881  180981 start.go:242] waiting for startup goroutines ...
	I1109 14:35:39.550889  180981 start.go:247] waiting for cluster config update ...
	I1109 14:35:39.550900  180981 start.go:256] writing updated cluster config ...
	I1109 14:35:39.551188  180981 ssh_runner.go:195] Run: rm -f paused
	I1109 14:35:39.554892  180981 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:35:39.559834  180981 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-2z64q" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:39.566199  180981 pod_ready.go:94] pod "coredns-5dd5756b68-2z64q" is "Ready"
	I1109 14:35:39.566228  180981 pod_ready.go:86] duration metric: took 6.371763ms for pod "coredns-5dd5756b68-2z64q" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:39.569715  180981 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-349599" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:39.575264  180981 pod_ready.go:94] pod "etcd-old-k8s-version-349599" is "Ready"
	I1109 14:35:39.575330  180981 pod_ready.go:86] duration metric: took 5.588455ms for pod "etcd-old-k8s-version-349599" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:39.578684  180981 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-349599" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:39.584166  180981 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-349599" is "Ready"
	I1109 14:35:39.584194  180981 pod_ready.go:86] duration metric: took 5.483314ms for pod "kube-apiserver-old-k8s-version-349599" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:39.587222  180981 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-349599" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:39.960294  180981 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-349599" is "Ready"
	I1109 14:35:39.960325  180981 pod_ready.go:86] duration metric: took 373.073889ms for pod "kube-controller-manager-old-k8s-version-349599" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:40.161226  180981 pod_ready.go:83] waiting for pod "kube-proxy-tcp6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:40.560123  180981 pod_ready.go:94] pod "kube-proxy-tcp6s" is "Ready"
	I1109 14:35:40.560158  180981 pod_ready.go:86] duration metric: took 398.899073ms for pod "kube-proxy-tcp6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:40.759654  180981 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-349599" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:41.161182  180981 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-349599" is "Ready"
	I1109 14:35:41.161222  180981 pod_ready.go:86] duration metric: took 401.532395ms for pod "kube-scheduler-old-k8s-version-349599" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:35:41.161236  180981 pod_ready.go:40] duration metric: took 1.606305497s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:35:41.220137  180981 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1109 14:35:41.223400  180981 out.go:203] 
	W1109 14:35:41.226318  180981 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1109 14:35:41.229408  180981 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1109 14:35:41.232439  180981 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-349599" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:35:38 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:38.8321581Z" level=info msg="Created container b25f5583a35c0527c905e7935d54b166e405b69ef8b4e5acb8c8553d4e276035: kube-system/coredns-5dd5756b68-2z64q/coredns" id=3c069e6e-1577-4429-8efd-58d690b2472b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:35:38 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:38.833067373Z" level=info msg="Starting container: b25f5583a35c0527c905e7935d54b166e405b69ef8b4e5acb8c8553d4e276035" id=c25446f6-3b86-48c0-ba08-0960f19107c2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:35:38 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:38.834755687Z" level=info msg="Started container" PID=1949 containerID=b25f5583a35c0527c905e7935d54b166e405b69ef8b4e5acb8c8553d4e276035 description=kube-system/coredns-5dd5756b68-2z64q/coredns id=c25446f6-3b86-48c0-ba08-0960f19107c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8d089934dae232dab2ca52ba269de91e21071d4e08036aac3a0f8851b46ec9aa
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.738084747Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ec88ebc3-e833-4aad-9480-3a8a841a6b84 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.738155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.743429747Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:85e550818ca3362fdd63de27783179cb84446494e06e53644d67d728604c3ad6 UID:623307ca-4ed7-4378-9c59-77fc8b166a0b NetNS:/var/run/netns/e3321bc8-1c4f-4e15-8416-1b8744f1ca0c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079710}] Aliases:map[]}"
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.743466244Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.755231705Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:85e550818ca3362fdd63de27783179cb84446494e06e53644d67d728604c3ad6 UID:623307ca-4ed7-4378-9c59-77fc8b166a0b NetNS:/var/run/netns/e3321bc8-1c4f-4e15-8416-1b8744f1ca0c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079710}] Aliases:map[]}"
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.755434963Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.758543839Z" level=info msg="Ran pod sandbox 85e550818ca3362fdd63de27783179cb84446494e06e53644d67d728604c3ad6 with infra container: default/busybox/POD" id=ec88ebc3-e833-4aad-9480-3a8a841a6b84 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.760824837Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3e18b57f-6f8c-4087-b5b6-efb91e5a783e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.760962963Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3e18b57f-6f8c-4087-b5b6-efb91e5a783e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.761016371Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3e18b57f-6f8c-4087-b5b6-efb91e5a783e name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.762462763Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ff8041e0-8bf7-487f-aaaa-55f2c04b0e2f name=/runtime.v1.ImageService/PullImage
	Nov 09 14:35:41 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:41.764845727Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.866008918Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=ff8041e0-8bf7-487f-aaaa-55f2c04b0e2f name=/runtime.v1.ImageService/PullImage
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.867279268Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1ae7212d-1644-4472-aa70-66854a1e6cd8 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.870009647Z" level=info msg="Creating container: default/busybox/busybox" id=0b88a5cd-88cd-4574-a7f9-60b45c120dc7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.870225648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.876790791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.877425207Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.894585277Z" level=info msg="Created container ee5c8dc4b0e5f5898203490e2e4a31969b1dae641b3e0a6286b5734cba9b7c43: default/busybox/busybox" id=0b88a5cd-88cd-4574-a7f9-60b45c120dc7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.896097253Z" level=info msg="Starting container: ee5c8dc4b0e5f5898203490e2e4a31969b1dae641b3e0a6286b5734cba9b7c43" id=c959eb55-d0d6-4149-87f8-d5dd9608d8ee name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:35:43 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:43.89768818Z" level=info msg="Started container" PID=2001 containerID=ee5c8dc4b0e5f5898203490e2e4a31969b1dae641b3e0a6286b5734cba9b7c43 description=default/busybox/busybox id=c959eb55-d0d6-4149-87f8-d5dd9608d8ee name=/runtime.v1.RuntimeService/StartContainer sandboxID=85e550818ca3362fdd63de27783179cb84446494e06e53644d67d728604c3ad6
	Nov 09 14:35:49 old-k8s-version-349599 crio[838]: time="2025-11-09T14:35:49.631425488Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ee5c8dc4b0e5f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   85e550818ca33       busybox                                          default
	b25f5583a35c0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   8d089934dae23       coredns-5dd5756b68-2z64q                         kube-system
	326c8148b3dc1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago      Running             storage-provisioner       0                   a8de96c4f3694       storage-provisioner                              kube-system
	c93961935b72f       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   a1e9aaf8ebdb7       kindnet-2r8mz                                    kube-system
	f6d920e9fdd7f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   a28bf08a8cb00       kube-proxy-tcp6s                                 kube-system
	324d7b7676bc4       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      46 seconds ago      Running             etcd                      0                   0bb030c356b13       etcd-old-k8s-version-349599                      kube-system
	d4f18a37925ef       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      46 seconds ago      Running             kube-controller-manager   0                   0bc4155d003c1       kube-controller-manager-old-k8s-version-349599   kube-system
	d1c42ae8337b7       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      46 seconds ago      Running             kube-apiserver            0                   d47d6b10fab50       kube-apiserver-old-k8s-version-349599            kube-system
	d34753281803d       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      46 seconds ago      Running             kube-scheduler            0                   c8c422b4a6f7b       kube-scheduler-old-k8s-version-349599            kube-system
	
	
	==> coredns [b25f5583a35c0527c905e7935d54b166e405b69ef8b4e5acb8c8553d4e276035] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41484 - 20124 "HINFO IN 3330968203696484667.5196511438381478725. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004911143s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-349599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-349599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=old-k8s-version-349599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_35_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:35:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-349599
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:35:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:35:42 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:35:42 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:35:42 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:35:42 +0000   Sun, 09 Nov 2025 14:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-349599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                134d2443-5714-4231-bc47-128f14f493a4
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-2z64q                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-349599                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-2r8mz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-349599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-349599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-tcp6s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-349599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-349599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-349599 event: Registered Node old-k8s-version-349599 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-349599 status is now: NodeReady
	
	
	==> dmesg <==
	[ +45.728314] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:12] overlayfs: idmapped layers are currently not supported
	[ +35.606556] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [324d7b7676bc4cc1bcc7a8fb530897b0491c29a8aff4de6a99f757808ec0abea] <==
	{"level":"info","ts":"2025-11-09T14:35:04.609442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-09T14:35:04.609564Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-09T14:35:04.611558Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-09T14:35:04.611671Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-09T14:35:04.61174Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-09T14:35:04.615538Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-09T14:35:04.615611Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-09T14:35:05.087928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-09T14:35:05.088056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-09T14:35:05.088097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-09T14:35:05.088148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-09T14:35:05.088181Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-09T14:35:05.088229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-09T14:35:05.088263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-09T14:35:05.090244Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:35:05.094327Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-349599 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-09T14:35:05.094419Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:35:05.095552Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-09T14:35:05.099854Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:35:05.100036Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:35:05.100109Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:35:05.100151Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:35:05.101265Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-09T14:35:05.132486Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-09T14:35:05.132589Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:35:51 up  1:18,  0 user,  load average: 2.69, 3.45, 2.62
	Linux old-k8s-version-349599 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c93961935b72ff0e53c0104c5cdb29b87e6076e907d05de8e014cba4bd4917a5] <==
	I1109 14:35:27.924308       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:35:27.924671       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:35:27.924812       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:35:27.924877       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:35:27.924914       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:35:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:35:28.215977       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:35:28.216077       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:35:28.216112       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:35:28.217726       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:35:28.416839       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:35:28.416863       1 metrics.go:72] Registering metrics
	I1109 14:35:28.416918       1 controller.go:711] "Syncing nftables rules"
	I1109 14:35:38.131262       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:35:38.131421       1 main.go:301] handling current node
	I1109 14:35:48.126555       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:35:48.126590       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d1c42ae8337b787ed643e04f1e5f1899e55f2813b1fa41c1544d189a7edd8482] <==
	I1109 14:35:08.732714       1 controller.go:624] quota admission added evaluator for: namespaces
	I1109 14:35:08.736080       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 14:35:08.738906       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1109 14:35:08.748082       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:35:08.767786       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1109 14:35:08.767969       1 aggregator.go:166] initial CRD sync complete...
	I1109 14:35:08.768008       1 autoregister_controller.go:141] Starting autoregister controller
	I1109 14:35:08.768036       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:35:08.768063       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:35:09.437858       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:35:09.442795       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:35:09.442965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:35:10.136618       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:35:10.195793       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:35:10.255018       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:35:10.262597       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1109 14:35:10.264230       1 controller.go:624] quota admission added evaluator for: endpoints
	I1109 14:35:10.269918       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:35:10.651262       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1109 14:35:11.723550       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1109 14:35:11.738447       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:35:11.759488       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1109 14:35:24.326346       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1109 14:35:24.509862       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1109 14:35:49.695314       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.85.2:8443->192.168.85.1:44604: write: broken pipe
	
	
	==> kube-controller-manager [d4f18a37925efd931f343e362c110e5002f10695b1cbc188773a1c3cd27ab8ba] <==
	I1109 14:35:23.659916       1 shared_informer.go:318] Caches are synced for resource quota
	I1109 14:35:23.696787       1 shared_informer.go:318] Caches are synced for cronjob
	I1109 14:35:23.702721       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1109 14:35:24.044827       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:35:24.073272       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:35:24.073298       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1109 14:35:24.392629       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1109 14:35:24.587763       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-tcp6s"
	I1109 14:35:24.629639       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2r8mz"
	I1109 14:35:24.646501       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-4l6sk"
	I1109 14:35:24.678721       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2z64q"
	I1109 14:35:24.711166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="356.798774ms"
	I1109 14:35:24.747740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.41449ms"
	I1109 14:35:24.747829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.43µs"
	I1109 14:35:24.789385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="94.639µs"
	I1109 14:35:25.653727       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1109 14:35:25.697276       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-4l6sk"
	I1109 14:35:25.717872       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.840333ms"
	I1109 14:35:25.743062       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="25.144357ms"
	I1109 14:35:25.743147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.081µs"
	I1109 14:35:38.426871       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.273µs"
	I1109 14:35:38.443819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.246µs"
	I1109 14:35:38.498604       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1109 14:35:39.215838       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.726855ms"
	I1109 14:35:39.216061       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.118µs"
	
	
	==> kube-proxy [f6d920e9fdd7f7203b0dcb10962fb7406e4fe49546d01cff9bc920a1f23701ec] <==
	I1109 14:35:25.128582       1 server_others.go:69] "Using iptables proxy"
	I1109 14:35:25.170734       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1109 14:35:25.225639       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:35:25.229022       1 server_others.go:152] "Using iptables Proxier"
	I1109 14:35:25.229061       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 14:35:25.229069       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 14:35:25.229099       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 14:35:25.229328       1 server.go:846] "Version info" version="v1.28.0"
	I1109 14:35:25.229339       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:35:25.230536       1 config.go:188] "Starting service config controller"
	I1109 14:35:25.230549       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 14:35:25.230566       1 config.go:97] "Starting endpoint slice config controller"
	I1109 14:35:25.230569       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 14:35:25.236774       1 config.go:315] "Starting node config controller"
	I1109 14:35:25.236798       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 14:35:25.331518       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1109 14:35:25.331567       1 shared_informer.go:318] Caches are synced for service config
	I1109 14:35:25.337288       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [d34753281803d16a6dd290ccc61f5446d9540ba9b2cdd87df92520d593cfd229] <==
	W1109 14:35:08.685955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1109 14:35:08.685971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1109 14:35:08.690093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 14:35:08.690222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1109 14:35:09.570720       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1109 14:35:09.570837       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1109 14:35:09.591415       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1109 14:35:09.591451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1109 14:35:09.611027       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1109 14:35:09.611069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1109 14:35:09.612169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1109 14:35:09.612286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1109 14:35:09.626411       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1109 14:35:09.626529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1109 14:35:09.701382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1109 14:35:09.701491       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1109 14:35:09.733368       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1109 14:35:09.733424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1109 14:35:09.876391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1109 14:35:09.876500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1109 14:35:09.889062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1109 14:35:09.889110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1109 14:35:10.065911       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1109 14:35:10.066035       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1109 14:35:13.263312       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.656196    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2ce7f7de-8607-4c51-a80b-792af4b7c036-kube-proxy\") pod \"kube-proxy-tcp6s\" (UID: \"2ce7f7de-8607-4c51-a80b-792af4b7c036\") " pod="kube-system/kube-proxy-tcp6s"
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.656253    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce7f7de-8607-4c51-a80b-792af4b7c036-lib-modules\") pod \"kube-proxy-tcp6s\" (UID: \"2ce7f7de-8607-4c51-a80b-792af4b7c036\") " pod="kube-system/kube-proxy-tcp6s"
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.656551    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjtng\" (UniqueName: \"kubernetes.io/projected/2ce7f7de-8607-4c51-a80b-792af4b7c036-kube-api-access-cjtng\") pod \"kube-proxy-tcp6s\" (UID: \"2ce7f7de-8607-4c51-a80b-792af4b7c036\") " pod="kube-system/kube-proxy-tcp6s"
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.656602    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2ce7f7de-8607-4c51-a80b-792af4b7c036-xtables-lock\") pod \"kube-proxy-tcp6s\" (UID: \"2ce7f7de-8607-4c51-a80b-792af4b7c036\") " pod="kube-system/kube-proxy-tcp6s"
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.674912    1379 topology_manager.go:215] "Topology Admit Handler" podUID="2e794a4e-c0c0-4759-9087-e80451139f25" podNamespace="kube-system" podName="kindnet-2r8mz"
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.757879    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2e794a4e-c0c0-4759-9087-e80451139f25-cni-cfg\") pod \"kindnet-2r8mz\" (UID: \"2e794a4e-c0c0-4759-9087-e80451139f25\") " pod="kube-system/kindnet-2r8mz"
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.758060    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e794a4e-c0c0-4759-9087-e80451139f25-xtables-lock\") pod \"kindnet-2r8mz\" (UID: \"2e794a4e-c0c0-4759-9087-e80451139f25\") " pod="kube-system/kindnet-2r8mz"
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.758166    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e794a4e-c0c0-4759-9087-e80451139f25-lib-modules\") pod \"kindnet-2r8mz\" (UID: \"2e794a4e-c0c0-4759-9087-e80451139f25\") " pod="kube-system/kindnet-2r8mz"
	Nov 09 14:35:24 old-k8s-version-349599 kubelet[1379]: I1109 14:35:24.758543    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78z7z\" (UniqueName: \"kubernetes.io/projected/2e794a4e-c0c0-4759-9087-e80451139f25-kube-api-access-78z7z\") pod \"kindnet-2r8mz\" (UID: \"2e794a4e-c0c0-4759-9087-e80451139f25\") " pod="kube-system/kindnet-2r8mz"
	Nov 09 14:35:25 old-k8s-version-349599 kubelet[1379]: W1109 14:35:25.016281    1379 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/crio-a1e9aaf8ebdb7631565576f1b60578a90d495d2fbe6e2aa43a97a5af12e461e8 WatchSource:0}: Error finding container a1e9aaf8ebdb7631565576f1b60578a90d495d2fbe6e2aa43a97a5af12e461e8: Status 404 returned error can't find the container with id a1e9aaf8ebdb7631565576f1b60578a90d495d2fbe6e2aa43a97a5af12e461e8
	Nov 09 14:35:28 old-k8s-version-349599 kubelet[1379]: I1109 14:35:28.156381    1379 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tcp6s" podStartSLOduration=4.156338937 podCreationTimestamp="2025-11-09 14:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:35:25.166294736 +0000 UTC m=+13.477390636" watchObservedRunningTime="2025-11-09 14:35:28.156338937 +0000 UTC m=+16.467434828"
	Nov 09 14:35:31 old-k8s-version-349599 kubelet[1379]: I1109 14:35:31.918241    1379 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-2r8mz" podStartSLOduration=5.114189235 podCreationTimestamp="2025-11-09 14:35:24 +0000 UTC" firstStartedPulling="2025-11-09 14:35:25.021886565 +0000 UTC m=+13.332982440" lastFinishedPulling="2025-11-09 14:35:27.82588767 +0000 UTC m=+16.136983553" observedRunningTime="2025-11-09 14:35:28.158446937 +0000 UTC m=+16.469542820" watchObservedRunningTime="2025-11-09 14:35:31.918190348 +0000 UTC m=+20.229286223"
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: I1109 14:35:38.393121    1379 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: I1109 14:35:38.426302    1379 topology_manager.go:215] "Topology Admit Handler" podUID="e47a91d2-3b07-42d3-9893-b2773590c8e4" podNamespace="kube-system" podName="coredns-5dd5756b68-2z64q"
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: I1109 14:35:38.430791    1379 topology_manager.go:215] "Topology Admit Handler" podUID="4bf52fd0-d955-4414-a7d8-bbc6576c0a2e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: I1109 14:35:38.566853    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e47a91d2-3b07-42d3-9893-b2773590c8e4-config-volume\") pod \"coredns-5dd5756b68-2z64q\" (UID: \"e47a91d2-3b07-42d3-9893-b2773590c8e4\") " pod="kube-system/coredns-5dd5756b68-2z64q"
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: I1109 14:35:38.566914    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4bf52fd0-d955-4414-a7d8-bbc6576c0a2e-tmp\") pod \"storage-provisioner\" (UID: \"4bf52fd0-d955-4414-a7d8-bbc6576c0a2e\") " pod="kube-system/storage-provisioner"
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: I1109 14:35:38.566942    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjwg\" (UniqueName: \"kubernetes.io/projected/4bf52fd0-d955-4414-a7d8-bbc6576c0a2e-kube-api-access-fxjwg\") pod \"storage-provisioner\" (UID: \"4bf52fd0-d955-4414-a7d8-bbc6576c0a2e\") " pod="kube-system/storage-provisioner"
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: I1109 14:35:38.566972    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6fp\" (UniqueName: \"kubernetes.io/projected/e47a91d2-3b07-42d3-9893-b2773590c8e4-kube-api-access-bh6fp\") pod \"coredns-5dd5756b68-2z64q\" (UID: \"e47a91d2-3b07-42d3-9893-b2773590c8e4\") " pod="kube-system/coredns-5dd5756b68-2z64q"
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: W1109 14:35:38.746583    1379 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/crio-a8de96c4f369475f7b75df274649dcf983a70199b620591732b123a9db5fe07c WatchSource:0}: Error finding container a8de96c4f369475f7b75df274649dcf983a70199b620591732b123a9db5fe07c: Status 404 returned error can't find the container with id a8de96c4f369475f7b75df274649dcf983a70199b620591732b123a9db5fe07c
	Nov 09 14:35:38 old-k8s-version-349599 kubelet[1379]: W1109 14:35:38.785251    1379 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/crio-8d089934dae232dab2ca52ba269de91e21071d4e08036aac3a0f8851b46ec9aa WatchSource:0}: Error finding container 8d089934dae232dab2ca52ba269de91e21071d4e08036aac3a0f8851b46ec9aa: Status 404 returned error can't find the container with id 8d089934dae232dab2ca52ba269de91e21071d4e08036aac3a0f8851b46ec9aa
	Nov 09 14:35:39 old-k8s-version-349599 kubelet[1379]: I1109 14:35:39.195851    1379 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.195809925 podCreationTimestamp="2025-11-09 14:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:35:39.18224013 +0000 UTC m=+27.493336021" watchObservedRunningTime="2025-11-09 14:35:39.195809925 +0000 UTC m=+27.506905800"
	Nov 09 14:35:41 old-k8s-version-349599 kubelet[1379]: I1109 14:35:41.436010    1379 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2z64q" podStartSLOduration=17.435969974 podCreationTimestamp="2025-11-09 14:35:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:35:39.196544354 +0000 UTC m=+27.507640262" watchObservedRunningTime="2025-11-09 14:35:41.435969974 +0000 UTC m=+29.747065848"
	Nov 09 14:35:41 old-k8s-version-349599 kubelet[1379]: I1109 14:35:41.436693    1379 topology_manager.go:215] "Topology Admit Handler" podUID="623307ca-4ed7-4378-9c59-77fc8b166a0b" podNamespace="default" podName="busybox"
	Nov 09 14:35:41 old-k8s-version-349599 kubelet[1379]: I1109 14:35:41.488125    1379 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kxlf\" (UniqueName: \"kubernetes.io/projected/623307ca-4ed7-4378-9c59-77fc8b166a0b-kube-api-access-2kxlf\") pod \"busybox\" (UID: \"623307ca-4ed7-4378-9c59-77fc8b166a0b\") " pod="default/busybox"
	
	
	==> storage-provisioner [326c8148b3dc1255b204c61f07f7fa7318a29d0f56358226d14a95c781680112] <==
	I1109 14:35:38.797360       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:35:38.818290       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:35:38.818334       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 14:35:38.828414       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:35:38.828690       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-349599_71946b80-9d5b-4f34-844e-d5e40e6d2832!
	I1109 14:35:38.829566       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcc80e6b-0d84-4689-aa52-aa122ab7b376", APIVersion:"v1", ResourceVersion:"404", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-349599_71946b80-9d5b-4f34-844e-d5e40e6d2832 became leader
	I1109 14:35:38.929657       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-349599_71946b80-9d5b-4f34-844e-d5e40e6d2832!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-349599 -n old-k8s-version-349599
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-349599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.57s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-349599 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-349599 --alsologtostderr -v=1: exit status 80 (1.730883726s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-349599 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:37:07.238178  187034 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:37:07.238349  187034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:07.238372  187034 out.go:374] Setting ErrFile to fd 2...
	I1109 14:37:07.238390  187034 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:07.238651  187034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:37:07.238954  187034 out.go:368] Setting JSON to false
	I1109 14:37:07.239008  187034 mustload.go:66] Loading cluster: old-k8s-version-349599
	I1109 14:37:07.239475  187034 config.go:182] Loaded profile config "old-k8s-version-349599": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1109 14:37:07.240036  187034 cli_runner.go:164] Run: docker container inspect old-k8s-version-349599 --format={{.State.Status}}
	I1109 14:37:07.264270  187034 host.go:66] Checking if "old-k8s-version-349599" exists ...
	I1109 14:37:07.264593  187034 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:07.333570  187034 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-09 14:37:07.324085167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:07.334277  187034 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-349599 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:37:07.337700  187034 out.go:179] * Pausing node old-k8s-version-349599 ... 
	I1109 14:37:07.341522  187034 host.go:66] Checking if "old-k8s-version-349599" exists ...
	I1109 14:37:07.342008  187034 ssh_runner.go:195] Run: systemctl --version
	I1109 14:37:07.342074  187034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-349599
	I1109 14:37:07.364199  187034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33050 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/old-k8s-version-349599/id_rsa Username:docker}
	I1109 14:37:07.470862  187034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:37:07.486411  187034 pause.go:52] kubelet running: true
	I1109 14:37:07.486477  187034 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:37:07.763701  187034 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:37:07.763791  187034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:37:07.836052  187034 cri.go:89] found id: "919922a360f6444e8428262f1b20912ee20139aebb5630e9a84eff8171773387"
	I1109 14:37:07.836079  187034 cri.go:89] found id: "621dc7ea0fcf5ac7c892c1bbe94e7a818d2f91fb0df63e7cf223ff4897b41ad3"
	I1109 14:37:07.836085  187034 cri.go:89] found id: "b3c3e104ca19d5be2d27a38e2d31cd4f2ad95c10aa4c506cfb01ed415c28d05f"
	I1109 14:37:07.836089  187034 cri.go:89] found id: "d300f08cb92b19cb9b5a272616c5e79afc0bfe1871029a2593822d2bf33fb5ca"
	I1109 14:37:07.836092  187034 cri.go:89] found id: "c78cbfc5d0722f14e9b91810784e0e104fe3b650ea48136618bd12858c936bee"
	I1109 14:37:07.836096  187034 cri.go:89] found id: "c8143ca805893c2ad47b324b4c3297e732d17fc9169127e2729034bb9adf7859"
	I1109 14:37:07.836100  187034 cri.go:89] found id: "03ab7893dd34a99ef31e35ac9a05d93d56b1b7a9163cfb3a3ee2f2072b6daee7"
	I1109 14:37:07.836103  187034 cri.go:89] found id: "98c23037f637958c6a33dfcab68d8ef514da9e545abea49c2a268832fe03da24"
	I1109 14:37:07.836107  187034 cri.go:89] found id: "03a4f2701535c8987f03de2ce9c786e81ea4137423fd749101e600759bd76a67"
	I1109 14:37:07.836113  187034 cri.go:89] found id: "43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d"
	I1109 14:37:07.836117  187034 cri.go:89] found id: "99a152525f2af576af22e0c8f665863f936a06e0043b3013a220d24d8cca148b"
	I1109 14:37:07.836120  187034 cri.go:89] found id: ""
	I1109 14:37:07.836172  187034 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:37:07.847544  187034 retry.go:31] will retry after 260.024508ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:37:07Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:37:08.107917  187034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:37:08.121885  187034 pause.go:52] kubelet running: false
	I1109 14:37:08.121966  187034 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:37:08.283487  187034 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:37:08.283561  187034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:37:08.350503  187034 cri.go:89] found id: "919922a360f6444e8428262f1b20912ee20139aebb5630e9a84eff8171773387"
	I1109 14:37:08.350570  187034 cri.go:89] found id: "621dc7ea0fcf5ac7c892c1bbe94e7a818d2f91fb0df63e7cf223ff4897b41ad3"
	I1109 14:37:08.350607  187034 cri.go:89] found id: "b3c3e104ca19d5be2d27a38e2d31cd4f2ad95c10aa4c506cfb01ed415c28d05f"
	I1109 14:37:08.350630  187034 cri.go:89] found id: "d300f08cb92b19cb9b5a272616c5e79afc0bfe1871029a2593822d2bf33fb5ca"
	I1109 14:37:08.350645  187034 cri.go:89] found id: "c78cbfc5d0722f14e9b91810784e0e104fe3b650ea48136618bd12858c936bee"
	I1109 14:37:08.350662  187034 cri.go:89] found id: "c8143ca805893c2ad47b324b4c3297e732d17fc9169127e2729034bb9adf7859"
	I1109 14:37:08.350689  187034 cri.go:89] found id: "03ab7893dd34a99ef31e35ac9a05d93d56b1b7a9163cfb3a3ee2f2072b6daee7"
	I1109 14:37:08.350709  187034 cri.go:89] found id: "98c23037f637958c6a33dfcab68d8ef514da9e545abea49c2a268832fe03da24"
	I1109 14:37:08.350725  187034 cri.go:89] found id: "03a4f2701535c8987f03de2ce9c786e81ea4137423fd749101e600759bd76a67"
	I1109 14:37:08.350745  187034 cri.go:89] found id: "43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d"
	I1109 14:37:08.350774  187034 cri.go:89] found id: "99a152525f2af576af22e0c8f665863f936a06e0043b3013a220d24d8cca148b"
	I1109 14:37:08.350795  187034 cri.go:89] found id: ""
	I1109 14:37:08.350873  187034 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:37:08.361822  187034 retry.go:31] will retry after 270.334715ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:37:08Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:37:08.633123  187034 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:37:08.646054  187034 pause.go:52] kubelet running: false
	I1109 14:37:08.646157  187034 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:37:08.813938  187034 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:37:08.814049  187034 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:37:08.884075  187034 cri.go:89] found id: "919922a360f6444e8428262f1b20912ee20139aebb5630e9a84eff8171773387"
	I1109 14:37:08.884095  187034 cri.go:89] found id: "621dc7ea0fcf5ac7c892c1bbe94e7a818d2f91fb0df63e7cf223ff4897b41ad3"
	I1109 14:37:08.884100  187034 cri.go:89] found id: "b3c3e104ca19d5be2d27a38e2d31cd4f2ad95c10aa4c506cfb01ed415c28d05f"
	I1109 14:37:08.884104  187034 cri.go:89] found id: "d300f08cb92b19cb9b5a272616c5e79afc0bfe1871029a2593822d2bf33fb5ca"
	I1109 14:37:08.884107  187034 cri.go:89] found id: "c78cbfc5d0722f14e9b91810784e0e104fe3b650ea48136618bd12858c936bee"
	I1109 14:37:08.884111  187034 cri.go:89] found id: "c8143ca805893c2ad47b324b4c3297e732d17fc9169127e2729034bb9adf7859"
	I1109 14:37:08.884113  187034 cri.go:89] found id: "03ab7893dd34a99ef31e35ac9a05d93d56b1b7a9163cfb3a3ee2f2072b6daee7"
	I1109 14:37:08.884116  187034 cri.go:89] found id: "98c23037f637958c6a33dfcab68d8ef514da9e545abea49c2a268832fe03da24"
	I1109 14:37:08.884119  187034 cri.go:89] found id: "03a4f2701535c8987f03de2ce9c786e81ea4137423fd749101e600759bd76a67"
	I1109 14:37:08.884125  187034 cri.go:89] found id: "43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d"
	I1109 14:37:08.884128  187034 cri.go:89] found id: "99a152525f2af576af22e0c8f665863f936a06e0043b3013a220d24d8cca148b"
	I1109 14:37:08.884132  187034 cri.go:89] found id: ""
	I1109 14:37:08.884218  187034 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:37:08.902069  187034 out.go:203] 
	W1109 14:37:08.904840  187034 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:37:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:37:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:37:08.904860  187034 out.go:285] * 
	* 
	W1109 14:37:08.909602  187034 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:37:08.912727  187034 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-349599 --alsologtostderr -v=1 failed: exit status 80
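Editor's note: the exit status 80 above reduces to the GUEST_PAUSE error shown in the stderr log: pause disables the kubelet, asks crictl for the kube-system/kubernetes-dashboard/istio-operator containers, then runs `sudo runc list -f json`, which keeps failing with `open /run/runc: no such file or directory`. A minimal sketch for re-running those same checks by hand against the node container (assuming the profile name from this run, `minikube ssh` command pass-through, and runc's default state root of /run/runc; these commands only mirror what the log shows minikube executing and are not part of the test harness):

  # was the kubelet actually stopped by the pause attempt?
  minikube -p old-k8s-version-349599 ssh -- sudo systemctl is-active kubelet
  # does the runc state directory exist at the path named in the error?
  minikube -p old-k8s-version-349599 ssh -- sudo ls -ld /run/runc
  # the exact listing that fails in the log (--root is runc's state directory)
  minikube -p old-k8s-version-349599 ssh -- sudo runc --root /run/runc list -f json
  # CRI-O's view of the containers that pause wanted to freeze
  minikube -p old-k8s-version-349599 ssh -- sudo crictl ps --label io.kubernetes.pod.namespace=kube-system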
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-349599
helpers_test.go:243: (dbg) docker inspect old-k8s-version-349599:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4",
	        "Created": "2025-11-09T14:34:44.509425898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 184700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:36:04.809404383Z",
	            "FinishedAt": "2025-11-09T14:36:03.969648528Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/hostname",
	        "HostsPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/hosts",
	        "LogPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4-json.log",
	        "Name": "/old-k8s-version-349599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-349599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-349599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4",
	                "LowerDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-349599",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-349599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-349599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-349599",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-349599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e8d720e4aff2defe96c53aee2f3b636cc01e2b02140ae6d28dcc44004e52d04",
	            "SandboxKey": "/var/run/docker/netns/8e8d720e4aff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-349599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:b7:ae:62:fb:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "30e3d4188e00f4421ef297f05815077467a901e69125366b2721a1705b0d17e1",
	                    "EndpointID": "904f7fe8c4fd30867b075d9f4c3a748018b6c7c5a17fc9c5d2913b01e2f95fdd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-349599",
	                        "05a48047eaa7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
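Editor's note: the docker inspect dump above is the stock post-mortem capture; the fields relevant to the pause failure are the state ones, which show the node container running and not paused after its 14:36:04 restart (the HostConfig also shows /run and /tmp mounted as tmpfs). A hedged one-liner for pulling just those fields with docker's Go-template `-f` flag, using the container name from this run:

  docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} started={{.State.StartedAt}}' old-k8s-version-349599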
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-349599 -n old-k8s-version-349599
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-349599 -n old-k8s-version-349599: exit status 2 (351.553686ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-349599 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-349599 logs -n 25: (1.333411336s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-241021 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo containerd config dump                                                                                                                                                                                                  │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo crio config                                                                                                                                                                                                             │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ delete  │ -p cilium-241021                                                                                                                                                                                                                              │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p force-systemd-env-413219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-413219  │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ ssh     │ force-systemd-flag-519664 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-519664 │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ delete  │ -p force-systemd-flag-519664                                                                                                                                                                                                                  │ force-systemd-flag-519664 │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-179822    │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p force-systemd-env-413219                                                                                                                                                                                                                   │ force-systemd-env-413219  │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p cert-options-276181 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ cert-options-276181 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ -p cert-options-276181 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p cert-options-276181                                                                                                                                                                                                                        │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	│ stop    │ -p old-k8s-version-349599 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-179822    │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ image   │ old-k8s-version-349599 image list --format=json                                                                                                                                                                                               │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ pause   │ -p old-k8s-version-349599 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:37:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:37:04.307197  186764 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:37:04.307315  186764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:04.307319  186764 out.go:374] Setting ErrFile to fd 2...
	I1109 14:37:04.307322  186764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:04.307665  186764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:37:04.308150  186764 out.go:368] Setting JSON to false
	I1109 14:37:04.309349  186764 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4775,"bootTime":1762694250,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:37:04.309443  186764 start.go:143] virtualization:  
	I1109 14:37:04.313455  186764 out.go:179] * [cert-expiration-179822] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:37:04.316771  186764 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:37:04.316859  186764 notify.go:221] Checking for updates...
	I1109 14:37:04.325867  186764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:37:04.328920  186764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:37:04.332129  186764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:37:04.335138  186764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:37:04.338169  186764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:37:04.341535  186764 config.go:182] Loaded profile config "cert-expiration-179822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:04.342068  186764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:37:04.368322  186764 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:37:04.368412  186764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:04.453299  186764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:37:04.443383025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:04.453394  186764 docker.go:319] overlay module found
	I1109 14:37:04.456692  186764 out.go:179] * Using the docker driver based on existing profile
	I1109 14:37:04.459742  186764 start.go:309] selected driver: docker
	I1109 14:37:04.459752  186764 start.go:930] validating driver "docker" against &{Name:cert-expiration-179822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-179822 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:04.459853  186764 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:37:04.460662  186764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:04.526336  186764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:37:04.516969467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:04.526645  186764 cni.go:84] Creating CNI manager for ""
	I1109 14:37:04.526698  186764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:37:04.526735  186764 start.go:353] cluster config:
	{Name:cert-expiration-179822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-179822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:04.531829  186764 out.go:179] * Starting "cert-expiration-179822" primary control-plane node in "cert-expiration-179822" cluster
	I1109 14:37:04.534759  186764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:37:04.537830  186764 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:37:04.540686  186764 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:04.540755  186764 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:37:04.540755  186764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:37:04.540764  186764 cache.go:65] Caching tarball of preloaded images
	I1109 14:37:04.540925  186764 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:37:04.540935  186764 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:37:04.541035  186764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/cert-expiration-179822/config.json ...
	I1109 14:37:04.562334  186764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:37:04.562345  186764 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:37:04.562363  186764 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:37:04.562384  186764 start.go:360] acquireMachinesLock for cert-expiration-179822: {Name:mk728324a0331ee9c1c68956a06f457b31040b5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:37:04.562446  186764 start.go:364] duration metric: took 46.195µs to acquireMachinesLock for "cert-expiration-179822"
	I1109 14:37:04.562464  186764 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:37:04.562470  186764 fix.go:54] fixHost starting: 
	I1109 14:37:04.562715  186764 cli_runner.go:164] Run: docker container inspect cert-expiration-179822 --format={{.State.Status}}
	I1109 14:37:04.580887  186764 fix.go:112] recreateIfNeeded on cert-expiration-179822: state=Running err=<nil>
	W1109 14:37:04.580906  186764 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:37:04.584206  186764 out.go:252] * Updating the running docker "cert-expiration-179822" container ...
	I1109 14:37:04.584234  186764 machine.go:94] provisionDockerMachine start ...
	I1109 14:37:04.584324  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:04.603489  186764 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:04.604013  186764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1109 14:37:04.604021  186764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:37:04.764828  186764 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-179822
	
	I1109 14:37:04.764842  186764 ubuntu.go:182] provisioning hostname "cert-expiration-179822"
	I1109 14:37:04.764914  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:04.785783  186764 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:04.786088  186764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1109 14:37:04.786096  186764 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-179822 && echo "cert-expiration-179822" | sudo tee /etc/hostname
	I1109 14:37:04.952698  186764 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-179822
	
	I1109 14:37:04.952764  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:04.975912  186764 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:04.976211  186764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1109 14:37:04.976226  186764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-179822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-179822/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-179822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:37:05.136424  186764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:37:05.136454  186764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:37:05.136470  186764 ubuntu.go:190] setting up certificates
	I1109 14:37:05.136478  186764 provision.go:84] configureAuth start
	I1109 14:37:05.136562  186764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-179822
	I1109 14:37:05.155935  186764 provision.go:143] copyHostCerts
	I1109 14:37:05.155997  186764 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:37:05.156011  186764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:37:05.156087  186764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:37:05.156198  186764 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:37:05.156202  186764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:37:05.156230  186764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:37:05.156308  186764 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:37:05.156312  186764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:37:05.156344  186764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:37:05.156445  186764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-179822 san=[127.0.0.1 192.168.76.2 cert-expiration-179822 localhost minikube]
	I1109 14:37:05.340934  186764 provision.go:177] copyRemoteCerts
	I1109 14:37:05.340991  186764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:37:05.341028  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:05.358978  186764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/cert-expiration-179822/id_rsa Username:docker}
	I1109 14:37:05.464929  186764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:37:05.487304  186764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:37:05.505592  186764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:37:05.523765  186764 provision.go:87] duration metric: took 387.27447ms to configureAuth
	I1109 14:37:05.523780  186764 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:37:05.523995  186764 config.go:182] Loaded profile config "cert-expiration-179822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:05.524113  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:05.542901  186764 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:05.543203  186764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1109 14:37:05.543215  186764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.909624849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.916179068Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.916764524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.932596525Z" level=info msg="Created container 43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q/dashboard-metrics-scraper" id=43845b38-f52b-4a6b-8124-68e225870fa0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.933335706Z" level=info msg="Starting container: 43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d" id=82f2c0c0-00c1-4916-8dbf-1d46089ca3b0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.934916131Z" level=info msg="Started container" PID=1652 containerID=43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q/dashboard-metrics-scraper id=82f2c0c0-00c1-4916-8dbf-1d46089ca3b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=87cb2197ede75806db1069c7ea256a275ef0b4efc840a2f2c023186e51da8f65
	Nov 09 14:36:51 old-k8s-version-349599 conmon[1650]: conmon 43eaaa1f3a3a30d7d2b3 <ninfo>: container 1652 exited with status 1
	Nov 09 14:36:52 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:52.192473609Z" level=info msg="Removing container: 1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82" id=4474a4d7-8766-4229-a45d-656f2dc69105 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:36:52 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:52.204123658Z" level=info msg="Error loading conmon cgroup of container 1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82: cgroup deleted" id=4474a4d7-8766-4229-a45d-656f2dc69105 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:36:52 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:52.209124825Z" level=info msg="Removed container 1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q/dashboard-metrics-scraper" id=4474a4d7-8766-4229-a45d-656f2dc69105 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.640997954Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.646689661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.646729579Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.646752505Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.649827545Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.649863279Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.649885589Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.653010788Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.653047244Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.65306911Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.656286084Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.656316936Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.656344358Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.659745349Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.659778843Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	43eaaa1f3a3a3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   87cb2197ede75       dashboard-metrics-scraper-5f989dc9cf-vrb6q       kubernetes-dashboard
	919922a360f64       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   2a38ac772f968       storage-provisioner                              kube-system
	99a152525f2af       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   30 seconds ago      Running             kubernetes-dashboard        0                   98dccb643bbe9       kubernetes-dashboard-8694d4445c-4d8hp            kubernetes-dashboard
	621dc7ea0fcf5       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           50 seconds ago      Running             coredns                     1                   e1116bd09abe8       coredns-5dd5756b68-2z64q                         kube-system
	a629db1cea71c       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   d8415107b3c41       busybox                                          default
	b3c3e104ca19d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   ae55c48c6f987       kindnet-2r8mz                                    kube-system
	d300f08cb92b1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   2a38ac772f968       storage-provisioner                              kube-system
	c78cbfc5d0722       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           50 seconds ago      Running             kube-proxy                  1                   3d48a49bf3438       kube-proxy-tcp6s                                 kube-system
	c8143ca805893       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           57 seconds ago      Running             kube-controller-manager     1                   09f65485c985d       kube-controller-manager-old-k8s-version-349599   kube-system
	03ab7893dd34a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           57 seconds ago      Running             etcd                        1                   9775c9637bb8b       etcd-old-k8s-version-349599                      kube-system
	98c23037f6379       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           57 seconds ago      Running             kube-scheduler              1                   e184e645a2deb       kube-scheduler-old-k8s-version-349599            kube-system
	03a4f2701535c       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           57 seconds ago      Running             kube-apiserver              1                   70f7765b5fb5b       kube-apiserver-old-k8s-version-349599            kube-system
	
	
	==> coredns [621dc7ea0fcf5ac7c892c1bbe94e7a818d2f91fb0df63e7cf223ff4897b41ad3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58671 - 2372 "HINFO IN 4515961813184582711.618130487628995851. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004351684s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-349599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-349599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=old-k8s-version-349599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_35_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:35:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-349599
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:36:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:36:48 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:36:48 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:36:48 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:36:48 +0000   Sun, 09 Nov 2025 14:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-349599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                134d2443-5714-4231-bc47-128f14f493a4
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-2z64q                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     106s
	  kube-system                 etcd-old-k8s-version-349599                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-2r8mz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-349599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-349599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-tcp6s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-349599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vrb6q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4d8hp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-349599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node old-k8s-version-349599 event: Registered Node old-k8s-version-349599 in Controller
	  Normal  NodeReady                92s                kubelet          Node old-k8s-version-349599 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 59s)  kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)  kubelet          Node old-k8s-version-349599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 59s)  kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-349599 event: Registered Node old-k8s-version-349599 in Controller
	
	
	==> dmesg <==
	[Nov 9 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:12] overlayfs: idmapped layers are currently not supported
	[ +35.606556] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [03ab7893dd34a99ef31e35ac9a05d93d56b1b7a9163cfb3a3ee2f2072b6daee7] <==
	{"level":"info","ts":"2025-11-09T14:36:13.290224Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-09T14:36:13.290436Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-09T14:36:13.290903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-09T14:36:13.291017Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-09T14:36:13.291134Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:36:13.291161Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:36:13.330277Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-09T14:36:13.330388Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-09T14:36:13.330118Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-09T14:36:13.33129Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-09T14:36:13.331403Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-09T14:36:14.383949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-09T14:36:14.384079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-09T14:36:14.384184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-09T14:36:14.384224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-09T14:36:14.384278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-09T14:36:14.384314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-09T14:36:14.384352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-09T14:36:14.388159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-09T14:36:14.388283Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-09T14:36:14.391936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:36:14.39337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-09T14:36:14.387979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-349599 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-09T14:36:14.396818Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:36:14.415984Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:37:10 up  1:19,  0 user,  load average: 2.26, 3.17, 2.59
	Linux old-k8s-version-349599 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3c3e104ca19d5be2d27a38e2d31cd4f2ad95c10aa4c506cfb01ed415c28d05f] <==
	I1109 14:36:19.447854       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:36:19.451375       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:36:19.451505       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:36:19.451518       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:36:19.451529       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:36:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:36:19.638654       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:36:19.638715       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:36:19.638747       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:36:19.638885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:36:49.640406       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:36:49.640419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:36:49.640523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1109 14:36:49.640588       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1109 14:36:51.038929       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:36:51.038965       1 metrics.go:72] Registering metrics
	I1109 14:36:51.039044       1 controller.go:711] "Syncing nftables rules"
	I1109 14:36:59.640070       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:36:59.640136       1 main.go:301] handling current node
	I1109 14:37:09.644698       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:37:09.644728       1 main.go:301] handling current node
	
	
	==> kube-apiserver [03a4f2701535c8987f03de2ce9c786e81ea4137423fd749101e600759bd76a67] <==
	I1109 14:36:17.870478       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1109 14:36:17.878023       1 aggregator.go:166] initial CRD sync complete...
	I1109 14:36:17.879687       1 autoregister_controller.go:141] Starting autoregister controller
	I1109 14:36:17.879728       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:36:17.879761       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:36:17.908015       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:36:17.912239       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:36:17.921516       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1109 14:36:17.921775       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1109 14:36:17.921819       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1109 14:36:17.922015       1 shared_informer.go:318] Caches are synced for configmaps
	I1109 14:36:17.922460       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 14:36:17.923796       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1109 14:36:17.960860       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:36:18.515513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:36:19.618082       1 controller.go:624] quota admission added evaluator for: namespaces
	I1109 14:36:19.681833       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1109 14:36:19.735276       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:36:19.754257       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:36:19.773786       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1109 14:36:19.838487       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.59.158"}
	I1109 14:36:19.868339       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.225.22"}
	I1109 14:36:29.980795       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:36:30.278246       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1109 14:36:30.346139       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c8143ca805893c2ad47b324b4c3297e732d17fc9169127e2729034bb9adf7859] <==
	I1109 14:36:30.284762       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1109 14:36:30.287705       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1109 14:36:30.310724       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-vrb6q"
	I1109 14:36:30.310822       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4d8hp"
	I1109 14:36:30.319714       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:36:30.319743       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1109 14:36:30.329907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.208437ms"
	I1109 14:36:30.339243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.141693ms"
	I1109 14:36:30.350666       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:36:30.382215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.249197ms"
	I1109 14:36:30.388024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.670654ms"
	I1109 14:36:30.388452       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1109 14:36:30.407960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.886736ms"
	I1109 14:36:30.408061       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="58.856µs"
	I1109 14:36:30.408414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.152853ms"
	I1109 14:36:30.408480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.327µs"
	I1109 14:36:35.148922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.351µs"
	I1109 14:36:36.165645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.59µs"
	I1109 14:36:37.176987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.444µs"
	I1109 14:36:40.180158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.105816ms"
	I1109 14:36:40.180605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.835µs"
	I1109 14:36:52.211810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.217µs"
	I1109 14:36:53.759627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.884807ms"
	I1109 14:36:53.760634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.594µs"
	I1109 14:37:00.637133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.234µs"
	
	
	==> kube-proxy [c78cbfc5d0722f14e9b91810784e0e104fe3b650ea48136618bd12858c936bee] <==
	I1109 14:36:19.391662       1 server_others.go:69] "Using iptables proxy"
	I1109 14:36:19.459780       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1109 14:36:19.567801       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:36:19.569849       1 server_others.go:152] "Using iptables Proxier"
	I1109 14:36:19.569881       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 14:36:19.569889       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 14:36:19.569921       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 14:36:19.570366       1 server.go:846] "Version info" version="v1.28.0"
	I1109 14:36:19.570382       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:36:19.571028       1 config.go:188] "Starting service config controller"
	I1109 14:36:19.571048       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 14:36:19.571065       1 config.go:97] "Starting endpoint slice config controller"
	I1109 14:36:19.571068       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 14:36:19.571584       1 config.go:315] "Starting node config controller"
	I1109 14:36:19.571591       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 14:36:19.671239       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1109 14:36:19.671915       1 shared_informer.go:318] Caches are synced for service config
	I1109 14:36:19.671935       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [98c23037f637958c6a33dfcab68d8ef514da9e545abea49c2a268832fe03da24] <==
	I1109 14:36:16.486175       1 serving.go:348] Generated self-signed cert in-memory
	W1109 14:36:17.840878       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:36:17.840914       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:36:17.840923       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:36:17.840931       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:36:17.897439       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1109 14:36:17.897478       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:36:17.899225       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:36:17.900164       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 14:36:17.902165       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1109 14:36:17.902376       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 14:36:18.001072       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: I1109 14:36:30.456383     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7fc94779-c3d2-4199-a9bb-0ece8c4a32c0-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vrb6q\" (UID: \"7fc94779-c3d2-4199-a9bb-0ece8c4a32c0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q"
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: I1109 14:36:30.456445     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdd2c\" (UniqueName: \"kubernetes.io/projected/7fc94779-c3d2-4199-a9bb-0ece8c4a32c0-kube-api-access-hdd2c\") pod \"dashboard-metrics-scraper-5f989dc9cf-vrb6q\" (UID: \"7fc94779-c3d2-4199-a9bb-0ece8c4a32c0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q"
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: I1109 14:36:30.456480     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/393cd277-6a4b-46b3-b252-9d0f66277445-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4d8hp\" (UID: \"393cd277-6a4b-46b3-b252-9d0f66277445\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4d8hp"
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: I1109 14:36:30.456504     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm9bq\" (UniqueName: \"kubernetes.io/projected/393cd277-6a4b-46b3-b252-9d0f66277445-kube-api-access-rm9bq\") pod \"kubernetes-dashboard-8694d4445c-4d8hp\" (UID: \"393cd277-6a4b-46b3-b252-9d0f66277445\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4d8hp"
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: W1109 14:36:30.645603     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/crio-87cb2197ede75806db1069c7ea256a275ef0b4efc840a2f2c023186e51da8f65 WatchSource:0}: Error finding container 87cb2197ede75806db1069c7ea256a275ef0b4efc840a2f2c023186e51da8f65: Status 404 returned error can't find the container with id 87cb2197ede75806db1069c7ea256a275ef0b4efc840a2f2c023186e51da8f65
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: W1109 14:36:30.665078     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/crio-98dccb643bbe9abe69dad05fdcc276c89b78456158fd13bc2421d2fad560def4 WatchSource:0}: Error finding container 98dccb643bbe9abe69dad05fdcc276c89b78456158fd13bc2421d2fad560def4: Status 404 returned error can't find the container with id 98dccb643bbe9abe69dad05fdcc276c89b78456158fd13bc2421d2fad560def4
	Nov 09 14:36:35 old-k8s-version-349599 kubelet[775]: I1109 14:36:35.133936     775 scope.go:117] "RemoveContainer" containerID="3b003cad32bb827d35950322067206f806ff6ea0a3a44c4c79d686132bb5687e"
	Nov 09 14:36:36 old-k8s-version-349599 kubelet[775]: I1109 14:36:36.140944     775 scope.go:117] "RemoveContainer" containerID="3b003cad32bb827d35950322067206f806ff6ea0a3a44c4c79d686132bb5687e"
	Nov 09 14:36:36 old-k8s-version-349599 kubelet[775]: I1109 14:36:36.142112     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:36 old-k8s-version-349599 kubelet[775]: E1109 14:36:36.142624     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:36:37 old-k8s-version-349599 kubelet[775]: I1109 14:36:37.144232     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:37 old-k8s-version-349599 kubelet[775]: E1109 14:36:37.144528     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:36:40 old-k8s-version-349599 kubelet[775]: I1109 14:36:40.621834     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:40 old-k8s-version-349599 kubelet[775]: E1109 14:36:40.622176     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:36:50 old-k8s-version-349599 kubelet[775]: I1109 14:36:50.180164     775 scope.go:117] "RemoveContainer" containerID="d300f08cb92b19cb9b5a272616c5e79afc0bfe1871029a2593822d2bf33fb5ca"
	Nov 09 14:36:50 old-k8s-version-349599 kubelet[775]: I1109 14:36:50.206290     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4d8hp" podStartSLOduration=11.409239556 podCreationTimestamp="2025-11-09 14:36:30 +0000 UTC" firstStartedPulling="2025-11-09 14:36:30.668344188 +0000 UTC m=+18.993871769" lastFinishedPulling="2025-11-09 14:36:39.464594174 +0000 UTC m=+27.790121755" observedRunningTime="2025-11-09 14:36:40.169423727 +0000 UTC m=+28.494951316" watchObservedRunningTime="2025-11-09 14:36:50.205489542 +0000 UTC m=+38.531017123"
	Nov 09 14:36:51 old-k8s-version-349599 kubelet[775]: I1109 14:36:51.906211     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:52 old-k8s-version-349599 kubelet[775]: I1109 14:36:52.189248     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:52 old-k8s-version-349599 kubelet[775]: I1109 14:36:52.189521     775 scope.go:117] "RemoveContainer" containerID="43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d"
	Nov 09 14:36:52 old-k8s-version-349599 kubelet[775]: E1109 14:36:52.189885     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:37:00 old-k8s-version-349599 kubelet[775]: I1109 14:37:00.621865     775 scope.go:117] "RemoveContainer" containerID="43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d"
	Nov 09 14:37:00 old-k8s-version-349599 kubelet[775]: E1109 14:37:00.622659     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:37:07 old-k8s-version-349599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:37:07 old-k8s-version-349599 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:37:07 old-k8s-version-349599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [99a152525f2af576af22e0c8f665863f936a06e0043b3013a220d24d8cca148b] <==
	2025/11/09 14:36:39 Using namespace: kubernetes-dashboard
	2025/11/09 14:36:39 Using in-cluster config to connect to apiserver
	2025/11/09 14:36:39 Using secret token for csrf signing
	2025/11/09 14:36:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:36:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:36:39 Successful initial request to the apiserver, version: v1.28.0
	2025/11/09 14:36:39 Generating JWE encryption key
	2025/11/09 14:36:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:36:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:36:39 Initializing JWE encryption key from synchronized object
	2025/11/09 14:36:39 Creating in-cluster Sidecar client
	2025/11/09 14:36:39 Serving insecurely on HTTP port: 9090
	2025/11/09 14:36:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:37:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:36:39 Starting overwatch
	
	
	==> storage-provisioner [919922a360f6444e8428262f1b20912ee20139aebb5630e9a84eff8171773387] <==
	I1109 14:36:50.233781       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:36:50.248801       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:36:50.248858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 14:37:07.646895       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:37:07.647446       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcc80e6b-0d84-4689-aa52-aa122ab7b376", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-349599_ef02f250-1691-4245-a6c4-383ca971c28b became leader
	I1109 14:37:07.650083       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-349599_ef02f250-1691-4245-a6c4-383ca971c28b!
	I1109 14:37:07.751264       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-349599_ef02f250-1691-4245-a6c4-383ca971c28b!
	
	
	==> storage-provisioner [d300f08cb92b19cb9b5a272616c5e79afc0bfe1871029a2593822d2bf33fb5ca] <==
	I1109 14:36:19.446766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:36:49.464666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
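The storage-provisioner failure above (dial tcp 10.96.0.1:443: i/o timeout) and the dashboard's metric-client retries both point at in-cluster reachability of the apiserver service rather than at the individual pods. A first cross-check, sketched here with the kubectl context used throughout this run, is to confirm the apiserver answers directly and that the default kubernetes Service still has the endpoints kube-proxy routes 10.96.0.1 to:

	kubectl --context old-k8s-version-349599 get --raw='/readyz?verbose'
	kubectl --context old-k8s-version-349599 -n default get endpoints kubernetes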
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-349599 -n old-k8s-version-349599
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-349599 -n old-k8s-version-349599: exit status 2 (445.173149ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-349599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
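The field-selector query above only returns pods whose phase is not Running; a container stuck in CrashLoopBackOff (as dashboard-metrics-scraper-5f989dc9cf-vrb6q is in the kubelet log) typically still reports phase Running, so it has to be inspected directly. A sketch, reusing the same kubectl context:

	kubectl --context old-k8s-version-349599 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-5f989dc9cf-vrb6q
	kubectl --context old-k8s-version-349599 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-vrb6q --previous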
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-349599
helpers_test.go:243: (dbg) docker inspect old-k8s-version-349599:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4",
	        "Created": "2025-11-09T14:34:44.509425898Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 184700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:36:04.809404383Z",
	            "FinishedAt": "2025-11-09T14:36:03.969648528Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/hostname",
	        "HostsPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/hosts",
	        "LogPath": "/var/lib/docker/containers/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4-json.log",
	        "Name": "/old-k8s-version-349599",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-349599:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-349599",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4",
	                "LowerDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/043f891c3159bca2d07d287e6da028ee93f869b1f74239ca97c3d7eb29ebd8a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-349599",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-349599/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-349599",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-349599",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-349599",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e8d720e4aff2defe96c53aee2f3b636cc01e2b02140ae6d28dcc44004e52d04",
	            "SandboxKey": "/var/run/docker/netns/8e8d720e4aff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-349599": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:b7:ae:62:fb:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "30e3d4188e00f4421ef297f05815077467a901e69125366b2721a1705b0d17e1",
	                    "EndpointID": "904f7fe8c4fd30867b075d9f4c3a748018b6c7c5a17fc9c5d2913b01e2f95fdd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-349599",
	                        "05a48047eaa7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
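When only a few of these fields matter, the same information can be pulled with docker inspect format templates instead of the full JSON dump; a sketch against the container name shown above:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-349599
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-349599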
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-349599 -n old-k8s-version-349599
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-349599 -n old-k8s-version-349599: exit status 2 (442.402197ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-349599 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-349599 logs -n 25: (1.648070067s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-241021 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo containerd config dump                                                                                                                                                                                                  │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ ssh     │ -p cilium-241021 sudo crio config                                                                                                                                                                                                             │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ delete  │ -p cilium-241021                                                                                                                                                                                                                              │ cilium-241021             │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p force-systemd-env-413219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-413219  │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ ssh     │ force-systemd-flag-519664 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-519664 │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ delete  │ -p force-systemd-flag-519664                                                                                                                                                                                                                  │ force-systemd-flag-519664 │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-179822    │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p force-systemd-env-413219                                                                                                                                                                                                                   │ force-systemd-env-413219  │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p cert-options-276181 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ cert-options-276181 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ -p cert-options-276181 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p cert-options-276181                                                                                                                                                                                                                        │ cert-options-276181       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	│ stop    │ -p old-k8s-version-349599 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-179822    │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ image   │ old-k8s-version-349599 image list --format=json                                                                                                                                                                                               │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ pause   │ -p old-k8s-version-349599 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-349599    │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:37:04
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:37:04.307197  186764 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:37:04.307315  186764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:04.307319  186764 out.go:374] Setting ErrFile to fd 2...
	I1109 14:37:04.307322  186764 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:04.307665  186764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:37:04.308150  186764 out.go:368] Setting JSON to false
	I1109 14:37:04.309349  186764 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4775,"bootTime":1762694250,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:37:04.309443  186764 start.go:143] virtualization:  
	I1109 14:37:04.313455  186764 out.go:179] * [cert-expiration-179822] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:37:04.316771  186764 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:37:04.316859  186764 notify.go:221] Checking for updates...
	I1109 14:37:04.325867  186764 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:37:04.328920  186764 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:37:04.332129  186764 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:37:04.335138  186764 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:37:04.338169  186764 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:37:04.341535  186764 config.go:182] Loaded profile config "cert-expiration-179822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:04.342068  186764 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:37:04.368322  186764 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:37:04.368412  186764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:04.453299  186764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:37:04.443383025 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:04.453394  186764 docker.go:319] overlay module found
	I1109 14:37:04.456692  186764 out.go:179] * Using the docker driver based on existing profile
	I1109 14:37:04.459742  186764 start.go:309] selected driver: docker
	I1109 14:37:04.459752  186764 start.go:930] validating driver "docker" against &{Name:cert-expiration-179822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-179822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:04.459853  186764 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:37:04.460662  186764 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:04.526336  186764 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:37:04.516969467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:04.526645  186764 cni.go:84] Creating CNI manager for ""
	I1109 14:37:04.526698  186764 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:37:04.526735  186764 start.go:353] cluster config:
	{Name:cert-expiration-179822 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-179822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:04.531829  186764 out.go:179] * Starting "cert-expiration-179822" primary control-plane node in "cert-expiration-179822" cluster
	I1109 14:37:04.534759  186764 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:37:04.537830  186764 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:37:04.540686  186764 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:04.540755  186764 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:37:04.540755  186764 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:37:04.540764  186764 cache.go:65] Caching tarball of preloaded images
	I1109 14:37:04.540925  186764 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:37:04.540935  186764 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:37:04.541035  186764 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/cert-expiration-179822/config.json ...
	I1109 14:37:04.562334  186764 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:37:04.562345  186764 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:37:04.562363  186764 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:37:04.562384  186764 start.go:360] acquireMachinesLock for cert-expiration-179822: {Name:mk728324a0331ee9c1c68956a06f457b31040b5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:37:04.562446  186764 start.go:364] duration metric: took 46.195µs to acquireMachinesLock for "cert-expiration-179822"
	I1109 14:37:04.562464  186764 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:37:04.562470  186764 fix.go:54] fixHost starting: 
	I1109 14:37:04.562715  186764 cli_runner.go:164] Run: docker container inspect cert-expiration-179822 --format={{.State.Status}}
	I1109 14:37:04.580887  186764 fix.go:112] recreateIfNeeded on cert-expiration-179822: state=Running err=<nil>
	W1109 14:37:04.580906  186764 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:37:04.584206  186764 out.go:252] * Updating the running docker "cert-expiration-179822" container ...
	I1109 14:37:04.584234  186764 machine.go:94] provisionDockerMachine start ...
	I1109 14:37:04.584324  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:04.603489  186764 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:04.604013  186764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1109 14:37:04.604021  186764 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:37:04.764828  186764 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-179822
	
	I1109 14:37:04.764842  186764 ubuntu.go:182] provisioning hostname "cert-expiration-179822"
	I1109 14:37:04.764914  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:04.785783  186764 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:04.786088  186764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1109 14:37:04.786096  186764 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-179822 && echo "cert-expiration-179822" | sudo tee /etc/hostname
	I1109 14:37:04.952698  186764 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-179822
	
	I1109 14:37:04.952764  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:04.975912  186764 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:04.976211  186764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1109 14:37:04.976226  186764 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-179822' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-179822/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-179822' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:37:05.136424  186764 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:37:05.136454  186764 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:37:05.136470  186764 ubuntu.go:190] setting up certificates
	I1109 14:37:05.136478  186764 provision.go:84] configureAuth start
	I1109 14:37:05.136562  186764 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-179822
	I1109 14:37:05.155935  186764 provision.go:143] copyHostCerts
	I1109 14:37:05.155997  186764 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:37:05.156011  186764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:37:05.156087  186764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:37:05.156198  186764 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:37:05.156202  186764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:37:05.156230  186764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:37:05.156308  186764 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:37:05.156312  186764 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:37:05.156344  186764 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:37:05.156445  186764 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-179822 san=[127.0.0.1 192.168.76.2 cert-expiration-179822 localhost minikube]
	I1109 14:37:05.340934  186764 provision.go:177] copyRemoteCerts
	I1109 14:37:05.340991  186764 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:37:05.341028  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:05.358978  186764 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33035 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/cert-expiration-179822/id_rsa Username:docker}
	I1109 14:37:05.464929  186764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:37:05.487304  186764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1109 14:37:05.505592  186764 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:37:05.523765  186764 provision.go:87] duration metric: took 387.27447ms to configureAuth
	I1109 14:37:05.523780  186764 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:37:05.523995  186764 config.go:182] Loaded profile config "cert-expiration-179822": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:05.524113  186764 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-179822
	I1109 14:37:05.542901  186764 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:05.543203  186764 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33035 <nil> <nil>}
	I1109 14:37:05.543215  186764 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.909624849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.916179068Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.916764524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.932596525Z" level=info msg="Created container 43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q/dashboard-metrics-scraper" id=43845b38-f52b-4a6b-8124-68e225870fa0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.933335706Z" level=info msg="Starting container: 43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d" id=82f2c0c0-00c1-4916-8dbf-1d46089ca3b0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:36:51 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:51.934916131Z" level=info msg="Started container" PID=1652 containerID=43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q/dashboard-metrics-scraper id=82f2c0c0-00c1-4916-8dbf-1d46089ca3b0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=87cb2197ede75806db1069c7ea256a275ef0b4efc840a2f2c023186e51da8f65
	Nov 09 14:36:51 old-k8s-version-349599 conmon[1650]: conmon 43eaaa1f3a3a30d7d2b3 <ninfo>: container 1652 exited with status 1
	Nov 09 14:36:52 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:52.192473609Z" level=info msg="Removing container: 1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82" id=4474a4d7-8766-4229-a45d-656f2dc69105 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:36:52 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:52.204123658Z" level=info msg="Error loading conmon cgroup of container 1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82: cgroup deleted" id=4474a4d7-8766-4229-a45d-656f2dc69105 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:36:52 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:52.209124825Z" level=info msg="Removed container 1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q/dashboard-metrics-scraper" id=4474a4d7-8766-4229-a45d-656f2dc69105 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.640997954Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.646689661Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.646729579Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.646752505Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.649827545Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.649863279Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.649885589Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.653010788Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.653047244Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.65306911Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.656286084Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.656316936Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.656344358Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.659745349Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:36:59 old-k8s-version-349599 crio[650]: time="2025-11-09T14:36:59.659778843Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	43eaaa1f3a3a3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   87cb2197ede75       dashboard-metrics-scraper-5f989dc9cf-vrb6q       kubernetes-dashboard
	919922a360f64       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   2a38ac772f968       storage-provisioner                              kube-system
	99a152525f2af       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   33 seconds ago       Running             kubernetes-dashboard        0                   98dccb643bbe9       kubernetes-dashboard-8694d4445c-4d8hp            kubernetes-dashboard
	621dc7ea0fcf5       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           53 seconds ago       Running             coredns                     1                   e1116bd09abe8       coredns-5dd5756b68-2z64q                         kube-system
	a629db1cea71c       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   d8415107b3c41       busybox                                          default
	b3c3e104ca19d       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   ae55c48c6f987       kindnet-2r8mz                                    kube-system
	d300f08cb92b1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   2a38ac772f968       storage-provisioner                              kube-system
	c78cbfc5d0722       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           53 seconds ago       Running             kube-proxy                  1                   3d48a49bf3438       kube-proxy-tcp6s                                 kube-system
	c8143ca805893       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   09f65485c985d       kube-controller-manager-old-k8s-version-349599   kube-system
	03ab7893dd34a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   9775c9637bb8b       etcd-old-k8s-version-349599                      kube-system
	98c23037f6379       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   e184e645a2deb       kube-scheduler-old-k8s-version-349599            kube-system
	03a4f2701535c       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   70f7765b5fb5b       kube-apiserver-old-k8s-version-349599            kube-system
	
	
	==> coredns [621dc7ea0fcf5ac7c892c1bbe94e7a818d2f91fb0df63e7cf223ff4897b41ad3] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:58671 - 2372 "HINFO IN 4515961813184582711.618130487628995851. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.004351684s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-349599
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-349599
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=old-k8s-version-349599
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_35_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:35:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-349599
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:36:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:36:48 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:36:48 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:36:48 +0000   Sun, 09 Nov 2025 14:35:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:36:48 +0000   Sun, 09 Nov 2025 14:35:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-349599
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                134d2443-5714-4231-bc47-128f14f493a4
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-2z64q                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-349599                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m1s
	  kube-system                 kindnet-2r8mz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-349599             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-349599    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-tcp6s                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-349599             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vrb6q        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4d8hp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-349599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node old-k8s-version-349599 event: Registered Node old-k8s-version-349599 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-349599 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 61s)  kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 61s)  kubelet          Node old-k8s-version-349599 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 61s)  kubelet          Node old-k8s-version-349599 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node old-k8s-version-349599 event: Registered Node old-k8s-version-349599 in Controller
	
	
	==> dmesg <==
	[Nov 9 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:12] overlayfs: idmapped layers are currently not supported
	[ +35.606556] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [03ab7893dd34a99ef31e35ac9a05d93d56b1b7a9163cfb3a3ee2f2072b6daee7] <==
	{"level":"info","ts":"2025-11-09T14:36:13.290224Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-09T14:36:13.290436Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-11-09T14:36:13.290903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-09T14:36:13.291017Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-09T14:36:13.291134Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:36:13.291161Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-09T14:36:13.330277Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-09T14:36:13.330388Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-09T14:36:13.330118Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-09T14:36:13.33129Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-09T14:36:13.331403Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-09T14:36:14.383949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-09T14:36:14.384079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-09T14:36:14.384184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-09T14:36:14.384224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-11-09T14:36:14.384278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-09T14:36:14.384314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-11-09T14:36:14.384352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-11-09T14:36:14.388159Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-09T14:36:14.388283Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-09T14:36:14.391936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:36:14.39337Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-09T14:36:14.387979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-349599 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-09T14:36:14.396818Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-09T14:36:14.415984Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:37:12 up  1:19,  0 user,  load average: 2.26, 3.17, 2.59
	Linux old-k8s-version-349599 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b3c3e104ca19d5be2d27a38e2d31cd4f2ad95c10aa4c506cfb01ed415c28d05f] <==
	I1109 14:36:19.447854       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:36:19.451375       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:36:19.451505       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:36:19.451518       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:36:19.451529       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:36:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:36:19.638654       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:36:19.638715       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:36:19.638747       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:36:19.638885       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:36:49.640406       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:36:49.640419       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:36:49.640523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1109 14:36:49.640588       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1109 14:36:51.038929       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:36:51.038965       1 metrics.go:72] Registering metrics
	I1109 14:36:51.039044       1 controller.go:711] "Syncing nftables rules"
	I1109 14:36:59.640070       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:36:59.640136       1 main.go:301] handling current node
	I1109 14:37:09.644698       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:37:09.644728       1 main.go:301] handling current node
	
	
	==> kube-apiserver [03a4f2701535c8987f03de2ce9c786e81ea4137423fd749101e600759bd76a67] <==
	I1109 14:36:17.870478       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1109 14:36:17.878023       1 aggregator.go:166] initial CRD sync complete...
	I1109 14:36:17.879687       1 autoregister_controller.go:141] Starting autoregister controller
	I1109 14:36:17.879728       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:36:17.879761       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:36:17.908015       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:36:17.912239       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:36:17.921516       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1109 14:36:17.921775       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1109 14:36:17.921819       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1109 14:36:17.922015       1 shared_informer.go:318] Caches are synced for configmaps
	I1109 14:36:17.922460       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 14:36:17.923796       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1109 14:36:17.960860       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:36:18.515513       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:36:19.618082       1 controller.go:624] quota admission added evaluator for: namespaces
	I1109 14:36:19.681833       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1109 14:36:19.735276       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:36:19.754257       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:36:19.773786       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1109 14:36:19.838487       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.59.158"}
	I1109 14:36:19.868339       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.225.22"}
	I1109 14:36:29.980795       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:36:30.278246       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1109 14:36:30.346139       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [c8143ca805893c2ad47b324b4c3297e732d17fc9169127e2729034bb9adf7859] <==
	I1109 14:36:30.284762       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1109 14:36:30.287705       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1109 14:36:30.310724       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-vrb6q"
	I1109 14:36:30.310822       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4d8hp"
	I1109 14:36:30.319714       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:36:30.319743       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1109 14:36:30.329907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="45.208437ms"
	I1109 14:36:30.339243       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.141693ms"
	I1109 14:36:30.350666       1 shared_informer.go:318] Caches are synced for garbage collector
	I1109 14:36:30.382215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.249197ms"
	I1109 14:36:30.388024       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.670654ms"
	I1109 14:36:30.388452       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1109 14:36:30.407960       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="19.886736ms"
	I1109 14:36:30.408061       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="58.856µs"
	I1109 14:36:30.408414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="26.152853ms"
	I1109 14:36:30.408480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.327µs"
	I1109 14:36:35.148922       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.351µs"
	I1109 14:36:36.165645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.59µs"
	I1109 14:36:37.176987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.444µs"
	I1109 14:36:40.180158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.105816ms"
	I1109 14:36:40.180605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.835µs"
	I1109 14:36:52.211810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.217µs"
	I1109 14:36:53.759627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="17.884807ms"
	I1109 14:36:53.760634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.594µs"
	I1109 14:37:00.637133       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="52.234µs"
	
	
	==> kube-proxy [c78cbfc5d0722f14e9b91810784e0e104fe3b650ea48136618bd12858c936bee] <==
	I1109 14:36:19.391662       1 server_others.go:69] "Using iptables proxy"
	I1109 14:36:19.459780       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1109 14:36:19.567801       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:36:19.569849       1 server_others.go:152] "Using iptables Proxier"
	I1109 14:36:19.569881       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1109 14:36:19.569889       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1109 14:36:19.569921       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1109 14:36:19.570366       1 server.go:846] "Version info" version="v1.28.0"
	I1109 14:36:19.570382       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:36:19.571028       1 config.go:188] "Starting service config controller"
	I1109 14:36:19.571048       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1109 14:36:19.571065       1 config.go:97] "Starting endpoint slice config controller"
	I1109 14:36:19.571068       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1109 14:36:19.571584       1 config.go:315] "Starting node config controller"
	I1109 14:36:19.571591       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1109 14:36:19.671239       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1109 14:36:19.671915       1 shared_informer.go:318] Caches are synced for service config
	I1109 14:36:19.671935       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [98c23037f637958c6a33dfcab68d8ef514da9e545abea49c2a268832fe03da24] <==
	I1109 14:36:16.486175       1 serving.go:348] Generated self-signed cert in-memory
	W1109 14:36:17.840878       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:36:17.840914       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:36:17.840923       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:36:17.840931       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:36:17.897439       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1109 14:36:17.897478       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:36:17.899225       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:36:17.900164       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 14:36:17.902165       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1109 14:36:17.902376       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 14:36:18.001072       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: I1109 14:36:30.456383     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7fc94779-c3d2-4199-a9bb-0ece8c4a32c0-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vrb6q\" (UID: \"7fc94779-c3d2-4199-a9bb-0ece8c4a32c0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q"
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: I1109 14:36:30.456445     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdd2c\" (UniqueName: \"kubernetes.io/projected/7fc94779-c3d2-4199-a9bb-0ece8c4a32c0-kube-api-access-hdd2c\") pod \"dashboard-metrics-scraper-5f989dc9cf-vrb6q\" (UID: \"7fc94779-c3d2-4199-a9bb-0ece8c4a32c0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q"
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: I1109 14:36:30.456480     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/393cd277-6a4b-46b3-b252-9d0f66277445-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4d8hp\" (UID: \"393cd277-6a4b-46b3-b252-9d0f66277445\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4d8hp"
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: I1109 14:36:30.456504     775 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm9bq\" (UniqueName: \"kubernetes.io/projected/393cd277-6a4b-46b3-b252-9d0f66277445-kube-api-access-rm9bq\") pod \"kubernetes-dashboard-8694d4445c-4d8hp\" (UID: \"393cd277-6a4b-46b3-b252-9d0f66277445\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4d8hp"
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: W1109 14:36:30.645603     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/crio-87cb2197ede75806db1069c7ea256a275ef0b4efc840a2f2c023186e51da8f65 WatchSource:0}: Error finding container 87cb2197ede75806db1069c7ea256a275ef0b4efc840a2f2c023186e51da8f65: Status 404 returned error can't find the container with id 87cb2197ede75806db1069c7ea256a275ef0b4efc840a2f2c023186e51da8f65
	Nov 09 14:36:30 old-k8s-version-349599 kubelet[775]: W1109 14:36:30.665078     775 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/05a48047eaa75462f179a30f03ed71f80e3b9debeb5e99158451fa2d3f4dabf4/crio-98dccb643bbe9abe69dad05fdcc276c89b78456158fd13bc2421d2fad560def4 WatchSource:0}: Error finding container 98dccb643bbe9abe69dad05fdcc276c89b78456158fd13bc2421d2fad560def4: Status 404 returned error can't find the container with id 98dccb643bbe9abe69dad05fdcc276c89b78456158fd13bc2421d2fad560def4
	Nov 09 14:36:35 old-k8s-version-349599 kubelet[775]: I1109 14:36:35.133936     775 scope.go:117] "RemoveContainer" containerID="3b003cad32bb827d35950322067206f806ff6ea0a3a44c4c79d686132bb5687e"
	Nov 09 14:36:36 old-k8s-version-349599 kubelet[775]: I1109 14:36:36.140944     775 scope.go:117] "RemoveContainer" containerID="3b003cad32bb827d35950322067206f806ff6ea0a3a44c4c79d686132bb5687e"
	Nov 09 14:36:36 old-k8s-version-349599 kubelet[775]: I1109 14:36:36.142112     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:36 old-k8s-version-349599 kubelet[775]: E1109 14:36:36.142624     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:36:37 old-k8s-version-349599 kubelet[775]: I1109 14:36:37.144232     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:37 old-k8s-version-349599 kubelet[775]: E1109 14:36:37.144528     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:36:40 old-k8s-version-349599 kubelet[775]: I1109 14:36:40.621834     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:40 old-k8s-version-349599 kubelet[775]: E1109 14:36:40.622176     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:36:50 old-k8s-version-349599 kubelet[775]: I1109 14:36:50.180164     775 scope.go:117] "RemoveContainer" containerID="d300f08cb92b19cb9b5a272616c5e79afc0bfe1871029a2593822d2bf33fb5ca"
	Nov 09 14:36:50 old-k8s-version-349599 kubelet[775]: I1109 14:36:50.206290     775 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4d8hp" podStartSLOduration=11.409239556 podCreationTimestamp="2025-11-09 14:36:30 +0000 UTC" firstStartedPulling="2025-11-09 14:36:30.668344188 +0000 UTC m=+18.993871769" lastFinishedPulling="2025-11-09 14:36:39.464594174 +0000 UTC m=+27.790121755" observedRunningTime="2025-11-09 14:36:40.169423727 +0000 UTC m=+28.494951316" watchObservedRunningTime="2025-11-09 14:36:50.205489542 +0000 UTC m=+38.531017123"
	Nov 09 14:36:51 old-k8s-version-349599 kubelet[775]: I1109 14:36:51.906211     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:52 old-k8s-version-349599 kubelet[775]: I1109 14:36:52.189248     775 scope.go:117] "RemoveContainer" containerID="1e42cf99b7d01821fcb8ed85dc640839269f65bf1bcd085d1e7395d561734f82"
	Nov 09 14:36:52 old-k8s-version-349599 kubelet[775]: I1109 14:36:52.189521     775 scope.go:117] "RemoveContainer" containerID="43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d"
	Nov 09 14:36:52 old-k8s-version-349599 kubelet[775]: E1109 14:36:52.189885     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:37:00 old-k8s-version-349599 kubelet[775]: I1109 14:37:00.621865     775 scope.go:117] "RemoveContainer" containerID="43eaaa1f3a3a30d7d2b345fa130225cfafa48f473bd3042ba003d472afa8b09d"
	Nov 09 14:37:00 old-k8s-version-349599 kubelet[775]: E1109 14:37:00.622659     775 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vrb6q_kubernetes-dashboard(7fc94779-c3d2-4199-a9bb-0ece8c4a32c0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vrb6q" podUID="7fc94779-c3d2-4199-a9bb-0ece8c4a32c0"
	Nov 09 14:37:07 old-k8s-version-349599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:37:07 old-k8s-version-349599 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:37:07 old-k8s-version-349599 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [99a152525f2af576af22e0c8f665863f936a06e0043b3013a220d24d8cca148b] <==
	2025/11/09 14:36:39 Using namespace: kubernetes-dashboard
	2025/11/09 14:36:39 Using in-cluster config to connect to apiserver
	2025/11/09 14:36:39 Using secret token for csrf signing
	2025/11/09 14:36:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:36:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:36:39 Successful initial request to the apiserver, version: v1.28.0
	2025/11/09 14:36:39 Generating JWE encryption key
	2025/11/09 14:36:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:36:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:36:39 Initializing JWE encryption key from synchronized object
	2025/11/09 14:36:39 Creating in-cluster Sidecar client
	2025/11/09 14:36:39 Serving insecurely on HTTP port: 9090
	2025/11/09 14:36:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:37:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:36:39 Starting overwatch
	
	
	==> storage-provisioner [919922a360f6444e8428262f1b20912ee20139aebb5630e9a84eff8171773387] <==
	I1109 14:36:50.233781       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:36:50.248801       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:36:50.248858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 14:37:07.646895       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:37:07.647446       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bcc80e6b-0d84-4689-aa52-aa122ab7b376", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-349599_ef02f250-1691-4245-a6c4-383ca971c28b became leader
	I1109 14:37:07.650083       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-349599_ef02f250-1691-4245-a6c4-383ca971c28b!
	I1109 14:37:07.751264       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-349599_ef02f250-1691-4245-a6c4-383ca971c28b!
	
	
	==> storage-provisioner [d300f08cb92b19cb9b5a272616c5e79afc0bfe1871029a2593822d2bf33fb5ca] <==
	I1109 14:36:19.446766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:36:49.464666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-349599 -n old-k8s-version-349599
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-349599 -n old-k8s-version-349599: exit status 2 (488.953502ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-349599 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.76s)
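
For reference, the non-running-pod query in the post-mortem above can be scripted directly. A minimal Go sketch of that same kubectl call, assuming kubectl is on PATH and the old-k8s-version-349599 context from this run still exists (flags and names are taken from the log; the wrapper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// listNotRunningPods shells out to kubectl the same way the post-mortem step
// above does: select every pod in every namespace whose phase is not Running
// and print only the pod names.
func listNotRunningPods(kubeContext string) (string, error) {
	out, err := exec.Command(
		"kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}",
	).CombinedOutput()
	return string(out), err
}

func main() {
	names, err := listNotRunningPods("old-k8s-version-349599")
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println("pods not in Running phase:", names)
}
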

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.094978ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:38:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
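
The MK_ADDON_ENABLE_PAUSED error in the stderr block above comes from minikube's paused-state check, which runs `sudo runc list -f json` on the node and here fails with "open /run/runc: no such file or directory" on this cri-o node. A hedged sketch for re-running just that probe by hand, assuming the profile is still up (binary path, profile name and the probed command come from the report; the Go wrapper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// runcListOnNode repeats the probe named in the MK_ADDON_ENABLE_PAUSED error:
// shell into the minikube node and ask runc for its container list as JSON.
func runcListOnNode(profile string) (string, error) {
	out, err := exec.Command(
		"out/minikube-linux-arm64", "ssh", "-p", profile,
		"sudo runc list -f json",
	).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runcListOnNode("default-k8s-diff-port-103048")
	fmt.Print(out)
	if err != nil {
		// In this run the probe exited non-zero, matching the report.
		fmt.Println("probe failed as in the report:", err)
	}
}
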
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-103048 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-103048 describe deploy/metrics-server -n kube-system: exit status 1 (86.031454ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-103048 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
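
The assertion at start_stop_delete_test.go:219 greps the describe output for the overridden image string. A minimal sketch of the same check done with a jsonpath query instead of describe, assuming the metrics-server deployment had actually been created (context, namespace, deployment name and expected image are taken from the log; the query form is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// deploymentImages returns the container images of a deployment, which is the
// value the assertion above expects to contain the fake.domain override.
func deploymentImages(kubeContext, namespace, name string) (string, error) {
	out, err := exec.Command(
		"kubectl", "--context", kubeContext, "-n", namespace,
		"get", "deploy", name,
		"-o=jsonpath={.spec.template.spec.containers[*].image}",
	).Output()
	return string(out), err
}

func main() {
	images, err := deploymentImages("default-k8s-diff-port-103048", "kube-system", "metrics-server")
	if err != nil {
		fmt.Println("deployment not found (as in the failed run):", err)
		return
	}
	want := "fake.domain/registry.k8s.io/echoserver:1.4"
	fmt.Println("images:", images, "contains expected override:", strings.Contains(images, want))
}
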
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-103048
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-103048:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3",
	        "Created": "2025-11-09T14:37:24.407836175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 189703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:37:24.481487225Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/hosts",
	        "LogPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3-json.log",
	        "Name": "/default-k8s-diff-port-103048",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-103048:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-103048",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3",
	                "LowerDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-103048",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-103048/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-103048",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-103048",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-103048",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e6807b074544f5eb4b8ac22a11b360cff27b5b857c76a63b759a11fa478f14a",
	            "SandboxKey": "/var/run/docker/netns/8e6807b07454",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-103048": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:1b:6b:bf:df:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f575eafa491ba158377eb7b6fb901ba71cca9fc0a5cdf5e89e6c475d768dfea9",
	                    "EndpointID": "476e20615f1c2779239cd0a9b8df5ef29e6ecad345394d160b957cfb6ae64f8a",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-103048",
	                        "6ee0024be4f4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
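
For reference, the port mappings shown in the inspect output above (22, 2376, 5000, 8444 and 32443 each published on a random 127.0.0.1 host port) can be read back with the same Go template that the cli_runner calls later in this log use. A minimal stand-alone sketch, assuming only that the Docker CLI is on PATH and that the default-k8s-diff-port-103048 container still exists:

package main

// Minimal sketch (not part of the test suite): read the host port Docker
// published for the node container's SSH endpoint (22/tcp), using the same
// Go template the cli_runner invocations in this log use.
import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"default-k8s-diff-port-103048").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("22/tcp is published on 127.0.0.1:" + strings.TrimSpace(string(out)))
}

With the mappings captured above, this would print host port 33055, matching the SSH endpoint the provisioning log below dials.
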
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-103048 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-103048 logs -n 25: (1.200752255s)
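
The post-mortem collection above simply shells out to the minikube binary; a hypothetical stand-alone sketch of the same call is below. The binary path and profile name are taken from the log line above; capturing stdout and stderr together via CombinedOutput is an assumption for illustration, not necessarily how helpers_test.go captures them.

package main

// Hypothetical reproduction of the post-mortem "minikube logs -n 25" call
// for the failed profile, printing whatever the command returns.
import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64",
		"-p", "default-k8s-diff-port-103048", "logs", "-n", "25")
	out, err := cmd.CombinedOutput() // assumption: merge both streams for display
	if err != nil {
		fmt.Fprintln(os.Stderr, "minikube logs failed:", err)
	}
	fmt.Print(string(out))
}
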
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-241021 sudo crio config                                                                                                                                                                                                             │ cilium-241021                │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │                     │
	│ delete  │ -p cilium-241021                                                                                                                                                                                                                              │ cilium-241021                │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p force-systemd-env-413219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-413219     │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ ssh     │ force-systemd-flag-519664 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-519664    │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ delete  │ -p force-systemd-flag-519664                                                                                                                                                                                                                  │ force-systemd-flag-519664    │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p force-systemd-env-413219                                                                                                                                                                                                                   │ force-systemd-env-413219     │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p cert-options-276181 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ cert-options-276181 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ -p cert-options-276181 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p cert-options-276181                                                                                                                                                                                                                        │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	│ stop    │ -p old-k8s-version-349599 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ image   │ old-k8s-version-349599 image list --format=json                                                                                                                                                                                               │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ pause   │ -p old-k8s-version-349599 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ delete  │ -p cert-expiration-179822                                                                                                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:37:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:37:27.894277  190681 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:37:27.894406  190681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:27.894416  190681 out.go:374] Setting ErrFile to fd 2...
	I1109 14:37:27.894422  190681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:27.894789  190681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:37:27.895306  190681 out.go:368] Setting JSON to false
	I1109 14:37:27.896214  190681 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4798,"bootTime":1762694250,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:37:27.896283  190681 start.go:143] virtualization:  
	I1109 14:37:27.900282  190681 out.go:179] * [embed-certs-422728] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:37:27.904596  190681 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:37:27.904826  190681 notify.go:221] Checking for updates...
	I1109 14:37:27.910876  190681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:37:27.913958  190681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:37:27.916949  190681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:37:27.920020  190681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:37:27.923021  190681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:37:27.926652  190681 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:27.926783  190681 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:37:27.956898  190681 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:37:27.957090  190681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:28.011782  190681 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:37:28.000997805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:28.011962  190681 docker.go:319] overlay module found
	I1109 14:37:28.016020  190681 out.go:179] * Using the docker driver based on user configuration
	I1109 14:37:28.019201  190681 start.go:309] selected driver: docker
	I1109 14:37:28.019230  190681 start.go:930] validating driver "docker" against <nil>
	I1109 14:37:28.019247  190681 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:37:28.020086  190681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:28.081454  190681 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:37:28.07150824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:28.081608  190681 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:37:28.081853  190681 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:37:28.085016  190681 out.go:179] * Using Docker driver with root privileges
	I1109 14:37:28.087949  190681 cni.go:84] Creating CNI manager for ""
	I1109 14:37:28.088017  190681 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:37:28.088032  190681 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:37:28.088119  190681 start.go:353] cluster config:
	{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:28.091441  190681 out.go:179] * Starting "embed-certs-422728" primary control-plane node in "embed-certs-422728" cluster
	I1109 14:37:28.094451  190681 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:37:28.097483  190681 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:37:28.100545  190681 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:28.100596  190681 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:37:28.100623  190681 cache.go:65] Caching tarball of preloaded images
	I1109 14:37:28.100629  190681 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:37:28.100706  190681 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:37:28.100717  190681 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:37:28.100823  190681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:37:28.100845  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json: {Name:mk4bdaec63ea3c9d33dd739aedde655ecb97f8c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:28.120386  190681 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:37:28.120409  190681 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:37:28.120423  190681 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:37:28.120445  190681 start.go:360] acquireMachinesLock for embed-certs-422728: {Name:mkaf26c3066ebca49339c9527aed846108c5e799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:37:28.120559  190681 start.go:364] duration metric: took 88.411µs to acquireMachinesLock for "embed-certs-422728"
	I1109 14:37:28.120590  190681 start.go:93] Provisioning new machine with config: &{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:37:28.120662  190681 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:37:24.289388  189143 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-103048:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.628908959s)
	I1109 14:37:24.289413  189143 kic.go:203] duration metric: took 4.629038798s to extract preloaded images to volume ...
	W1109 14:37:24.289552  189143 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 14:37:24.289667  189143 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:37:24.388861  189143 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-103048 --name default-k8s-diff-port-103048 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-103048 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-103048 --network default-k8s-diff-port-103048 --ip 192.168.85.2 --volume default-k8s-diff-port-103048:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:37:24.745528  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Running}}
	I1109 14:37:24.782473  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:37:24.816268  189143 cli_runner.go:164] Run: docker exec default-k8s-diff-port-103048 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:37:24.933856  189143 oci.go:144] the created container "default-k8s-diff-port-103048" has a running status.
	I1109 14:37:24.933881  189143 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa...
	I1109 14:37:25.374288  189143 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:37:25.424219  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:37:25.460449  189143 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:37:25.460483  189143 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-103048 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:37:25.526335  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:37:25.563150  189143 machine.go:94] provisionDockerMachine start ...
	I1109 14:37:25.563257  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:25.589918  189143 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:25.590471  189143 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:37:25.590505  189143 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:37:25.591566  189143 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:37:28.124290  190681 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:37:28.124568  190681 start.go:159] libmachine.API.Create for "embed-certs-422728" (driver="docker")
	I1109 14:37:28.124616  190681 client.go:173] LocalClient.Create starting
	I1109 14:37:28.124694  190681 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 14:37:28.124738  190681 main.go:143] libmachine: Decoding PEM data...
	I1109 14:37:28.124757  190681 main.go:143] libmachine: Parsing certificate...
	I1109 14:37:28.124819  190681 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 14:37:28.124845  190681 main.go:143] libmachine: Decoding PEM data...
	I1109 14:37:28.124858  190681 main.go:143] libmachine: Parsing certificate...
	I1109 14:37:28.125225  190681 cli_runner.go:164] Run: docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:37:28.141263  190681 cli_runner.go:211] docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:37:28.141342  190681 network_create.go:284] running [docker network inspect embed-certs-422728] to gather additional debugging logs...
	I1109 14:37:28.141364  190681 cli_runner.go:164] Run: docker network inspect embed-certs-422728
	W1109 14:37:28.157685  190681 cli_runner.go:211] docker network inspect embed-certs-422728 returned with exit code 1
	I1109 14:37:28.157717  190681 network_create.go:287] error running [docker network inspect embed-certs-422728]: docker network inspect embed-certs-422728: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-422728 not found
	I1109 14:37:28.157737  190681 network_create.go:289] output of [docker network inspect embed-certs-422728]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-422728 not found
	
	** /stderr **
	I1109 14:37:28.157857  190681 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:37:28.174886  190681 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b901b8dcb821 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:01:f6:7f:4e:91} reservation:<nil>}
	I1109 14:37:28.175199  190681 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-46dda1eda2df IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:a9:4d:4f:8f:31} reservation:<nil>}
	I1109 14:37:28.175517  190681 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3b44df0b0b1c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:80:ac:56:fe:3d} reservation:<nil>}
	I1109 14:37:28.175994  190681 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019732d0}
	I1109 14:37:28.176022  190681 network_create.go:124] attempt to create docker network embed-certs-422728 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 14:37:28.176079  190681 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-422728 embed-certs-422728
	I1109 14:37:28.243516  190681 network_create.go:108] docker network embed-certs-422728 192.168.76.0/24 created
	I1109 14:37:28.243550  190681 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-422728" container
	I1109 14:37:28.243620  190681 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:37:28.259956  190681 cli_runner.go:164] Run: docker volume create embed-certs-422728 --label name.minikube.sigs.k8s.io=embed-certs-422728 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:37:28.277965  190681 oci.go:103] Successfully created a docker volume embed-certs-422728
	I1109 14:37:28.278056  190681 cli_runner.go:164] Run: docker run --rm --name embed-certs-422728-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-422728 --entrypoint /usr/bin/test -v embed-certs-422728:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:37:28.861326  190681 oci.go:107] Successfully prepared a docker volume embed-certs-422728
	I1109 14:37:28.861389  190681 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:28.861399  190681 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:37:28.861461  190681 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-422728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:37:28.763939  189143 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:37:28.763963  189143 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103048"
	I1109 14:37:28.764033  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:28.786740  189143 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:28.787052  189143 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:37:28.787069  189143 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103048 && echo "default-k8s-diff-port-103048" | sudo tee /etc/hostname
	I1109 14:37:28.968030  189143 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:37:28.968100  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:28.991597  189143 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:28.992013  189143 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:37:28.992035  189143 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:37:29.180074  189143 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:37:29.180104  189143 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:37:29.180138  189143 ubuntu.go:190] setting up certificates
	I1109 14:37:29.180147  189143 provision.go:84] configureAuth start
	I1109 14:37:29.180214  189143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:37:29.208683  189143 provision.go:143] copyHostCerts
	I1109 14:37:29.208753  189143 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:37:29.208767  189143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:37:29.208854  189143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:37:29.208968  189143 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:37:29.208979  189143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:37:29.209011  189143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:37:29.209081  189143 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:37:29.209091  189143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:37:29.209115  189143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:37:29.209173  189143 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103048 localhost minikube]
	I1109 14:37:29.947152  189143 provision.go:177] copyRemoteCerts
	I1109 14:37:29.947241  189143 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:37:29.947290  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:29.970484  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:30.094209  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:37:30.118124  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 14:37:30.140811  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:37:30.162596  189143 provision.go:87] duration metric: took 982.428887ms to configureAuth
	I1109 14:37:30.162669  189143 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:37:30.162932  189143 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:30.163087  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.188399  189143 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:30.188745  189143 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:37:30.188761  189143 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:37:30.512011  189143 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:37:30.512097  189143 machine.go:97] duration metric: took 4.948924223s to provisionDockerMachine
	I1109 14:37:30.512123  189143 client.go:176] duration metric: took 11.84746418s to LocalClient.Create
	I1109 14:37:30.512169  189143 start.go:167] duration metric: took 11.84754917s to libmachine.API.Create "default-k8s-diff-port-103048"
	I1109 14:37:30.512196  189143 start.go:293] postStartSetup for "default-k8s-diff-port-103048" (driver="docker")
	I1109 14:37:30.512221  189143 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:37:30.512330  189143 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:37:30.512408  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.537079  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:30.648499  189143 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:37:30.652391  189143 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:37:30.652422  189143 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:37:30.652433  189143 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:37:30.652520  189143 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:37:30.652617  189143 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:37:30.652731  189143 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:37:30.660986  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:37:30.680750  189143 start.go:296] duration metric: took 168.525074ms for postStartSetup
	I1109 14:37:30.681184  189143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:37:30.712083  189143 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/config.json ...
	I1109 14:37:30.712381  189143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:37:30.712425  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.733868  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:30.841430  189143 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:37:30.849227  189143 start.go:128] duration metric: took 12.188224409s to createHost
	I1109 14:37:30.849253  189143 start.go:83] releasing machines lock for "default-k8s-diff-port-103048", held for 12.188344861s
	I1109 14:37:30.849331  189143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:37:30.867928  189143 ssh_runner.go:195] Run: cat /version.json
	I1109 14:37:30.867941  189143 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:37:30.867981  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.868015  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.897381  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:30.907682  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:31.028555  189143 ssh_runner.go:195] Run: systemctl --version
	I1109 14:37:31.036871  189143 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:37:31.164908  189143 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:37:31.172089  189143 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:37:31.172165  189143 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:37:31.238997  189143 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 14:37:31.239022  189143 start.go:496] detecting cgroup driver to use...
	I1109 14:37:31.239057  189143 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:37:31.239108  189143 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:37:31.271492  189143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:37:31.289679  189143 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:37:31.289742  189143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:37:31.312162  189143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:37:31.337696  189143 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:37:31.517684  189143 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:37:31.689258  189143 docker.go:234] disabling docker service ...
	I1109 14:37:31.689334  189143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:37:31.714460  189143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:37:31.728794  189143 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:37:31.881564  189143 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:37:32.013745  189143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:37:32.031812  189143 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:37:32.049051  189143 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:37:32.049128  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.061319  189143 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:37:32.061422  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.073624  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.083429  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.096227  189143 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:37:32.105855  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.115035  189143 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.131863  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.144259  189143 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:37:32.152292  189143 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:37:32.159982  189143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:37:32.293121  189143 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:37:33.731815  189143 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.438617031s)
	I1109 14:37:33.731844  189143 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:37:33.731915  189143 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:37:33.736048  189143 start.go:564] Will wait 60s for crictl version
	I1109 14:37:33.736119  189143 ssh_runner.go:195] Run: which crictl
	I1109 14:37:33.739955  189143 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:37:33.780063  189143 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:37:33.780140  189143 ssh_runner.go:195] Run: crio --version
	I1109 14:37:33.826915  189143 ssh_runner.go:195] Run: crio --version
	I1109 14:37:33.885137  189143 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:37:33.888072  189143 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:37:33.909741  189143 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:37:33.915383  189143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:37:33.927397  189143 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:37:33.927510  189143 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:33.927559  189143 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:37:33.975664  189143 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:37:33.975685  189143 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:37:33.975745  189143 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:37:34.005445  189143 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:37:34.005469  189143 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:37:34.005477  189143 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:37:34.005577  189143 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:37:34.005666  189143 ssh_runner.go:195] Run: crio config
	I1109 14:37:34.083321  189143 cni.go:84] Creating CNI manager for ""
	I1109 14:37:34.083403  189143 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:37:34.083434  189143 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:37:34.083486  189143 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103048 NodeName:default-k8s-diff-port-103048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:37:34.083658  189143 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:37:34.083761  189143 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:37:34.095026  189143 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:37:34.095141  189143 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:37:34.111248  189143 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:37:34.143321  189143 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:37:34.175263  189143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
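Because the rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new before kubeadm runs, it can be sanity-checked on the node first. A sketch assuming the kubeadm binary minikube already placed under /var/lib/minikube/binaries and that the `config validate` subcommand is available in this kubeadm release:

    # From inside the node, e.g. after `minikube ssh -p default-k8s-diff-port-103048`:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new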
	I1109 14:37:34.204391  189143 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:37:34.215888  189143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:37:34.234263  189143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:37:34.543457  189143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:37:34.564062  189143 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048 for IP: 192.168.85.2
	I1109 14:37:34.564082  189143 certs.go:195] generating shared ca certs ...
	I1109 14:37:34.564098  189143 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:34.564231  189143 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:37:34.564271  189143 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:37:34.564278  189143 certs.go:257] generating profile certs ...
	I1109 14:37:34.564331  189143 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key
	I1109 14:37:34.564342  189143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt with IP's: []
	I1109 14:37:35.204051  189143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt ...
	I1109 14:37:35.204088  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: {Name:mk319386f170bb2d77712b8498f6bcc46b18ff9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:35.204300  189143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key ...
	I1109 14:37:35.204311  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key: {Name:mk1b66c07b4c486eedd4da8511cb496afffa6e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:35.204415  189143 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c
	I1109 14:37:35.204430  189143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt.87358e1c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1109 14:37:35.765746  189143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt.87358e1c ...
	I1109 14:37:35.765777  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt.87358e1c: {Name:mk78654738496ae4878f38f21723da2e12a738c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:35.765949  189143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c ...
	I1109 14:37:35.765966  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c: {Name:mk5b82f1276b0676529203567965fd1e87ea9e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:35.766061  189143 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt.87358e1c -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt
	I1109 14:37:35.766143  189143 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key
	I1109 14:37:35.766205  189143 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key
	I1109 14:37:35.766225  189143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt with IP's: []
	I1109 14:37:36.305399  189143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt ...
	I1109 14:37:36.305475  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt: {Name:mkbde1426663a0abe079281b4819c0d065ec6219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:36.305683  189143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key ...
	I1109 14:37:36.305722  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key: {Name:mka94bae95eeae6fddc50a30b08aed4d38918606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:36.305955  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:37:36.306026  189143 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:37:36.306053  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:37:36.306100  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:37:36.306154  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:37:36.306199  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:37:36.306273  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:37:36.306876  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:37:36.328938  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:37:36.347713  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:37:36.369561  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:37:36.390274  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:37:36.417418  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:37:36.438988  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:37:36.469111  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:37:36.491206  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:37:36.509992  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:37:36.529176  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:37:36.547490  189143 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:37:36.567777  189143 ssh_runner.go:195] Run: openssl version
	I1109 14:37:36.574386  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:37:36.583356  189143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:37:36.587195  189143 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:37:36.587250  189143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:37:36.628084  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:37:36.637044  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:37:36.644668  189143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:37:36.648624  189143 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:37:36.648691  189143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:37:36.689895  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:37:36.698046  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:37:36.705924  189143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:36.710146  189143 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:36.710256  189143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:36.751992  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
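The /etc/ssl/certs/<hash>.0 symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention, and the hash component comes from the same openssl invocation the log runs:

    # Prints the subject hash that names the /etc/ssl/certs/<hash>.0 symlink.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # For the minikube CA above this yields b5213941, matching the link target in the log.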
	I1109 14:37:36.760006  189143 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:37:36.763958  189143 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:37:36.764052  189143 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:36.764159  189143 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:37:36.764247  189143 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:37:36.791006  189143 cri.go:89] found id: ""
	I1109 14:37:36.791122  189143 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:37:36.800580  189143 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:37:36.808608  189143 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:37:36.808705  189143 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:37:36.818251  189143 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:37:36.818315  189143 kubeadm.go:158] found existing configuration files:
	
	I1109 14:37:36.818386  189143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1109 14:37:36.827006  189143 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:37:36.827110  189143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:37:36.835814  189143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1109 14:37:36.843973  189143 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:37:36.844079  189143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:37:36.851559  189143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1109 14:37:36.859852  189143 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:37:36.859986  189143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:37:36.867289  189143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1109 14:37:36.875295  189143 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:37:36.875398  189143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:37:36.882707  189143 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:37:36.927926  189143 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:37:36.928306  189143 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:37:36.980410  189143 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:37:36.980494  189143 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 14:37:36.980533  189143 kubeadm.go:319] OS: Linux
	I1109 14:37:36.980580  189143 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:37:36.980631  189143 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 14:37:36.980681  189143 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:37:36.980731  189143 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:37:36.980783  189143 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:37:36.980836  189143 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:37:36.980883  189143 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:37:36.980941  189143 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:37:36.980989  189143 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 14:37:37.110194  189143 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:37:37.110309  189143 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:37:37.110405  189143 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:37:37.124673  189143 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
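The CGROUPS_* lines in the preflight output come from kubeadm's system verification; the same information can be approximated straight from the kernel, independent of kubeadm (a rough check, not the exact code path kubeadm uses):

    # Lists cgroup controllers the kernel reports as enabled (4th column of /proc/cgroups).
    awk 'NR > 1 && $4 == 1 {print toupper($1)}' /proc/cgroups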
	I1109 14:37:33.621312  190681 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-422728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.75981683s)
	I1109 14:37:33.621344  190681 kic.go:203] duration metric: took 4.75994154s to extract preloaded images to volume ...
	W1109 14:37:33.621483  190681 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 14:37:33.621599  190681 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:37:33.717076  190681 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-422728 --name embed-certs-422728 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-422728 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-422728 --network embed-certs-422728 --ip 192.168.76.2 --volume embed-certs-422728:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:37:34.108930  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Running}}
	I1109 14:37:34.144754  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:37:34.179083  190681 cli_runner.go:164] Run: docker exec embed-certs-422728 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:37:34.246021  190681 oci.go:144] the created container "embed-certs-422728" has a running status.
	I1109 14:37:34.246050  190681 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa...
	I1109 14:37:35.793995  190681 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:37:35.827258  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:37:35.855173  190681 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:37:35.855199  190681 kic_runner.go:114] Args: [docker exec --privileged embed-certs-422728 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:37:35.933016  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:37:35.961922  190681 machine.go:94] provisionDockerMachine start ...
	I1109 14:37:35.962016  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:35.985738  190681 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:35.986065  190681 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:37:35.986074  190681 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:37:36.163139  190681 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:37:36.163159  190681 ubuntu.go:182] provisioning hostname "embed-certs-422728"
	I1109 14:37:36.163230  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:36.184262  190681 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:36.184594  190681 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:37:36.184611  190681 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-422728 && echo "embed-certs-422728" | sudo tee /etc/hostname
	I1109 14:37:36.369570  190681 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:37:36.369635  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:36.395153  190681 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:36.395448  190681 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:37:36.395465  190681 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422728/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:37:36.556231  190681 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:37:36.556255  190681 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:37:36.556275  190681 ubuntu.go:190] setting up certificates
	I1109 14:37:36.556286  190681 provision.go:84] configureAuth start
	I1109 14:37:36.556354  190681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:37:36.577644  190681 provision.go:143] copyHostCerts
	I1109 14:37:36.577710  190681 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:37:36.577724  190681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:37:36.577799  190681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:37:36.577897  190681 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:37:36.577907  190681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:37:36.577936  190681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:37:36.577994  190681 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:37:36.578003  190681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:37:36.578027  190681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:37:36.578077  190681 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422728 san=[127.0.0.1 192.168.76.2 embed-certs-422728 localhost minikube]
	I1109 14:37:37.050151  190681 provision.go:177] copyRemoteCerts
	I1109 14:37:37.050264  190681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:37:37.050321  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.069435  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:37.178314  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:37:37.202711  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:37:37.219814  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
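The docker-machine style server certificate provisioned for embed-certs-422728 carries the SAN list shown a few lines earlier (127.0.0.1, 192.168.76.2, embed-certs-422728, localhost, minikube); once copied to /etc/docker it can be inspected directly, assuming an OpenSSL new enough to support -ext (1.1.1+):

    # Shows the Subject Alternative Names baked into the provisioned server cert.
    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName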
	I1109 14:37:37.237810  190681 provision.go:87] duration metric: took 681.504793ms to configureAuth
	I1109 14:37:37.237832  190681 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:37:37.238011  190681 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:37.238111  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.277717  190681 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:37.278060  190681 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:37:37.278082  190681 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:37:37.570802  190681 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:37:37.570825  190681 machine.go:97] duration metric: took 1.608885623s to provisionDockerMachine
	I1109 14:37:37.570835  190681 client.go:176] duration metric: took 9.446209559s to LocalClient.Create
	I1109 14:37:37.570848  190681 start.go:167] duration metric: took 9.446286055s to libmachine.API.Create "embed-certs-422728"
	I1109 14:37:37.570856  190681 start.go:293] postStartSetup for "embed-certs-422728" (driver="docker")
	I1109 14:37:37.570865  190681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:37:37.570935  190681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:37:37.570998  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.593092  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:37.700665  190681 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:37:37.704483  190681 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:37:37.704510  190681 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:37:37.704522  190681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:37:37.704587  190681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:37:37.704672  190681 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:37:37.704771  190681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:37:37.712412  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:37:37.730752  190681 start.go:296] duration metric: took 159.882791ms for postStartSetup
	I1109 14:37:37.731107  190681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:37:37.747546  190681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:37:37.747819  190681 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:37:37.747915  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.764253  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:37.877289  190681 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:37:37.882629  190681 start.go:128] duration metric: took 9.761952743s to createHost
	I1109 14:37:37.882654  190681 start.go:83] releasing machines lock for "embed-certs-422728", held for 9.762080974s
	I1109 14:37:37.882721  190681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:37:37.129759  189143 out.go:252]   - Generating certificates and keys ...
	I1109 14:37:37.129875  189143 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:37:37.129951  189143 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:37:37.939982  189143 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:37:37.904221  190681 ssh_runner.go:195] Run: cat /version.json
	I1109 14:37:37.904280  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.904613  190681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:37:37.904682  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.942266  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:37.948914  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:38.153694  190681 ssh_runner.go:195] Run: systemctl --version
	I1109 14:37:38.160793  190681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:37:38.206624  190681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:37:38.211010  190681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:37:38.211134  190681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:37:38.241762  190681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 14:37:38.241787  190681 start.go:496] detecting cgroup driver to use...
	I1109 14:37:38.241827  190681 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:37:38.241888  190681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:37:38.261521  190681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:37:38.276562  190681 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:37:38.276631  190681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:37:38.295061  190681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:37:38.314807  190681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:37:38.465078  190681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:37:38.661701  190681 docker.go:234] disabling docker service ...
	I1109 14:37:38.661781  190681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:37:38.685698  190681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:37:38.700596  190681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:37:38.852535  190681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:37:39.004478  190681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:37:39.021207  190681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:37:39.038137  190681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:37:39.038206  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.047706  190681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:37:39.047775  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.057668  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.066902  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.076064  190681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:37:39.084815  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.094208  190681 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.108916  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.118205  190681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:37:39.126682  190681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:37:39.135280  190681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:37:39.276261  190681 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:37:39.428332  190681 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:37:39.428416  190681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:37:39.433110  190681 start.go:564] Will wait 60s for crictl version
	I1109 14:37:39.433190  190681 ssh_runner.go:195] Run: which crictl
	I1109 14:37:39.436804  190681 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:37:39.462726  190681 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:37:39.462870  190681 ssh_runner.go:195] Run: crio --version
	I1109 14:37:39.497055  190681 ssh_runner.go:195] Run: crio --version
	I1109 14:37:39.536554  190681 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:37:39.539477  190681 cli_runner.go:164] Run: docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:37:39.560445  190681 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:37:39.568342  190681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:37:39.577861  190681 kubeadm.go:884] updating cluster {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:37:39.577977  190681 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:39.578030  190681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:37:39.630850  190681 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:37:39.630877  190681 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:37:39.630931  190681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:37:39.658737  190681 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:37:39.658763  190681 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:37:39.658771  190681 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:37:39.658863  190681 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:37:39.658944  190681 ssh_runner.go:195] Run: crio config
	I1109 14:37:39.721631  190681 cni.go:84] Creating CNI manager for ""
	I1109 14:37:39.721665  190681 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:37:39.721684  190681 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:37:39.721715  190681 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422728 NodeName:embed-certs-422728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:37:39.721873  190681 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:37:39.722002  190681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:37:39.730651  190681 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:37:39.730723  190681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:37:39.738876  190681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1109 14:37:39.757436  190681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:37:39.771011  190681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
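	The 2215-byte kubeadm.yaml.new written here is the rendered form of the config dump above. As a rough illustration only (not minikube's actual template), a stanza such as the KubeProxyConfiguration block can be produced with Go's text/template; the field names mirror the log, everything else below is an assumption.

```go
package main

import (
	"os"
	"text/template"
)

// kubeProxyTmpl mirrors the KubeProxyConfiguration stanza seen in the log;
// this is an illustrative template, not the one minikube actually ships.
const kubeProxyTmpl = `apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "{{.PodCIDR}}"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  tcpEstablishedTimeout: 0s
  tcpCloseWaitTimeout: 0s
`

func main() {
	t := template.Must(template.New("kubeproxy").Parse(kubeProxyTmpl))
	// 10.244.0.0/16 is the pod CIDR reported earlier in this run.
	if err := t.Execute(os.Stdout, struct{ PodCIDR string }{"10.244.0.0/16"}); err != nil {
		panic(err)
	}
}
```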
	I1109 14:37:39.786092  190681 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:37:39.790206  190681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
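	The bash snippet on the previous line rewrites /etc/hosts idempotently: strip any existing control-plane.minikube.internal entry, append the current IP, and install the temp file back with sudo cp. A minimal sketch of assembling that same snippet for an arbitrary IP/host pair (the helper name is made up):

```go
package main

import "fmt"

// hostsUpdateCmd builds the one-shot bash snippet seen in the log: remove any
// stale line for name, append "ip<TAB>name", and install the result via sudo cp.
func hostsUpdateCmd(ip, name string) string {
	entry := ip + "\t" + name // literal tab between IP and hostname, as in the log
	return fmt.Sprintf(
		`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		name, entry)
}

func main() {
	fmt.Println(hostsUpdateCmd("192.168.76.2", "control-plane.minikube.internal"))
}
```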
	I1109 14:37:39.799900  190681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:37:39.940282  190681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:37:39.965097  190681 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728 for IP: 192.168.76.2
	I1109 14:37:39.965166  190681 certs.go:195] generating shared ca certs ...
	I1109 14:37:39.965199  190681 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:39.965367  190681 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:37:39.965442  190681 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:37:39.965479  190681 certs.go:257] generating profile certs ...
	I1109 14:37:39.965568  190681 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key
	I1109 14:37:39.965597  190681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.crt with IP's: []
	I1109 14:37:40.464957  190681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.crt ...
	I1109 14:37:40.464985  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.crt: {Name:mkb3052a1a3ee81a199bbfd07c17ebda70f0241b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:40.465156  190681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key ...
	I1109 14:37:40.465163  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key: {Name:mk4c95f6664bd8acbdb34959202e45d60df7d02e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:40.465239  190681 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a
	I1109 14:37:40.465257  190681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt.b1b6b07a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1109 14:37:40.845161  190681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt.b1b6b07a ...
	I1109 14:37:40.845194  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt.b1b6b07a: {Name:mk34dc98430e56ee4a4f57cd0ba366d96b6dea41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:40.845406  190681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a ...
	I1109 14:37:40.845424  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a: {Name:mkf100a1bb9d721d9144c34041e2b66fa2fa32ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:40.845516  190681 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt.b1b6b07a -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt
	I1109 14:37:40.845600  190681 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key
	I1109 14:37:40.845664  190681 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key
	I1109 14:37:40.845684  190681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt with IP's: []
	I1109 14:37:41.999590  190681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt ...
	I1109 14:37:41.999623  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt: {Name:mk26367aa5d706d5485496188212ef42dd866cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:41.999796  190681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key ...
	I1109 14:37:41.999813  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key: {Name:mke3b694bfe374eb19b825b523bddbf55f17a2d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:42.000015  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:37:42.000058  190681 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:37:42.000072  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:37:42.000099  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:37:42.000128  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:37:42.000155  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:37:42.000202  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:37:42.000805  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:37:42.024398  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:37:42.047819  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:37:42.069993  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:37:42.094622  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1109 14:37:42.119636  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:37:42.145596  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:37:42.171131  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:37:42.196337  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:37:42.221062  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:37:42.245411  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:37:42.268392  190681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:37:42.294622  190681 ssh_runner.go:195] Run: openssl version
	I1109 14:37:42.302336  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:37:42.312594  190681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:37:42.317219  190681 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:37:42.317338  190681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:37:42.362880  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:37:42.372541  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:37:42.381922  190681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:42.386584  190681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:42.386701  190681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:42.430305  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:37:42.440223  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:37:42.449666  190681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:37:42.454514  190681 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:37:42.454662  190681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:37:42.498487  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
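	The openssl/ln sequence above installs each extra CA under /etc/ssl/certs by its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 in this run), which is how hashed-directory trust lookups find it. A local re-creation of that step, shelling out to openssl the same way the log does (paths are placeholders, not a claim about minikube's code):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under "<subject-hash>.0",
// the naming scheme OpenSSL uses when scanning a hashed certificate directory.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder paths; the log uses /usr/share/ca-certificates and /etc/ssl/certs.
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```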
	I1109 14:37:42.507929  190681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:37:42.512583  190681 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:37:42.512691  190681 kubeadm.go:401] StartCluster: {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:42.512807  190681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:37:42.512892  190681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:37:42.545234  190681 cri.go:89] found id: ""
	I1109 14:37:42.545357  190681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:37:42.556463  190681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:37:42.565543  190681 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:37:42.565668  190681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:37:42.576781  190681 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:37:42.576837  190681 kubeadm.go:158] found existing configuration files:
	
	I1109 14:37:42.576925  190681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:37:42.585838  190681 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:37:42.585953  190681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:37:42.594060  190681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:37:42.603270  190681 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:37:42.603384  190681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:37:42.611451  190681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:37:42.620895  190681 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:37:42.621003  190681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:37:42.630264  190681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:37:42.639623  190681 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:37:42.639740  190681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:37:42.647939  190681 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
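	The Start line above runs kubeadm init with the versioned binaries directory prepended to PATH and a fixed set of preflight checks ignored (pre-existing manifest/etcd directories, port 10250, swap, CPU/memory, and system verification, among others). A sketch of assembling an equivalent command string; the check list is copied (abridged) from the log, the helper name is made up:

```go
package main

import (
	"fmt"
	"strings"
)

// kubeadmInitCmd assembles the init invocation seen in the log: prepend the
// versioned binaries dir to PATH and skip the listed preflight checks.
func kubeadmInitCmd(binDir, configPath string, ignored []string) string {
	return fmt.Sprintf(
		`sudo /bin/bash -c "env PATH="%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s"`,
		binDir, configPath, strings.Join(ignored, ","))
}

func main() {
	// Abridged list of checks taken from the log line above.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests", "DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd", "Port-10250", "Swap", "NumCPU", "Mem",
		"SystemVerification", "FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	}
	fmt.Println(kubeadmInitCmd("/var/lib/minikube/binaries/v1.34.1",
		"/var/tmp/minikube/kubeadm.yaml", ignored))
}
```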
	I1109 14:37:42.729524  190681 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:37:42.730008  190681 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:37:42.762021  190681 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:37:42.762173  190681 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 14:37:42.762250  190681 kubeadm.go:319] OS: Linux
	I1109 14:37:42.762333  190681 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:37:42.762417  190681 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 14:37:42.762498  190681 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:37:42.762579  190681 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:37:42.762659  190681 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:37:42.762745  190681 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:37:42.762820  190681 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:37:42.762899  190681 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:37:42.762976  190681 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 14:37:42.838155  190681 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:37:42.838330  190681 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:37:42.838478  190681 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:37:42.859899  190681 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:37:42.866768  190681 out.go:252]   - Generating certificates and keys ...
	I1109 14:37:42.866943  190681 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:37:42.867058  190681 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:37:38.390439  189143 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:37:39.132013  189143 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:37:39.667087  189143 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:37:40.260225  189143 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:37:40.260377  189143 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-103048 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:37:42.060547  189143 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:37:42.060888  189143 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-103048 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:37:42.911663  189143 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:37:43.120616  189143 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:37:43.514096  189143 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:37:43.514397  189143 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:37:43.996218  189143 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:37:44.980688  189143 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:37:45.437449  189143 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:37:45.707091  189143 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:37:46.319949  189143 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:37:46.320711  189143 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:37:46.332006  189143 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:37:43.447449  190681 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:37:44.003488  190681 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:37:44.076213  190681 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:37:44.147781  190681 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:37:45.005626  190681 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:37:45.006224  190681 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-422728 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:37:45.236610  190681 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:37:45.237325  190681 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-422728 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:37:45.668726  190681 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:37:45.896215  190681 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:37:46.818389  190681 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:37:46.818607  190681 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:37:47.791785  190681 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:37:46.335519  189143 out.go:252]   - Booting up control plane ...
	I1109 14:37:46.335632  189143 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:37:46.335715  189143 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:37:46.336483  189143 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:37:46.372440  189143 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:37:46.372559  189143 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:37:46.381181  189143 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:37:46.381277  189143 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:37:46.381317  189143 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:37:46.558538  189143 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:37:46.558665  189143 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:37:48.060058  189143 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501614856s
	I1109 14:37:48.067853  189143 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:37:48.067965  189143 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1109 14:37:48.068060  189143 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:37:48.068142  189143 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:37:48.091168  190681 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:37:48.744223  190681 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:37:49.540188  190681 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:37:49.873301  190681 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:37:49.876135  190681 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:37:49.887520  190681 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:37:49.891081  190681 out.go:252]   - Booting up control plane ...
	I1109 14:37:49.891198  190681 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:37:49.891281  190681 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:37:49.900200  190681 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:37:49.937072  190681 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:37:49.937199  190681 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:37:49.954669  190681 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:37:49.954949  190681 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:37:49.955154  190681 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:37:50.174661  190681 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:37:50.174784  190681 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:37:51.192271  190681 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.016598052s
	I1109 14:37:51.194921  190681 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:37:51.195333  190681 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1109 14:37:51.195636  190681 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:37:51.196474  190681 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:37:56.512419  190681 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.315667569s
	I1109 14:37:54.423580  189143 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.354386642s
	I1109 14:37:58.540767  189143 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.472872963s
	I1109 14:37:59.070621  189143 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.001686771s
	I1109 14:37:59.091330  189143 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:37:59.109333  189143 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:37:59.131208  189143 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:37:59.131428  189143 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-103048 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:37:59.147642  189143 kubeadm.go:319] [bootstrap-token] Using token: adxlwq.cgijiq3nisu2ttzm
	I1109 14:37:59.152690  189143 out.go:252]   - Configuring RBAC rules ...
	I1109 14:37:59.152820  189143 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:37:59.160917  189143 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:37:59.169971  189143 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:37:59.175023  189143 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:37:59.179686  189143 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:37:59.184477  189143 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:37:59.478474  189143 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:37:59.942239  189143 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:38:00.527542  189143 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:38:00.527567  189143 kubeadm.go:319] 
	I1109 14:38:00.527632  189143 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:38:00.527646  189143 kubeadm.go:319] 
	I1109 14:38:00.527738  189143 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:38:00.527758  189143 kubeadm.go:319] 
	I1109 14:38:00.527786  189143 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:38:00.527851  189143 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:38:00.527963  189143 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:38:00.527975  189143 kubeadm.go:319] 
	I1109 14:38:00.528032  189143 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:38:00.528040  189143 kubeadm.go:319] 
	I1109 14:38:00.528090  189143 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:38:00.528098  189143 kubeadm.go:319] 
	I1109 14:38:00.528153  189143 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:38:00.528235  189143 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:38:00.528310  189143 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:38:00.528318  189143 kubeadm.go:319] 
	I1109 14:38:00.528415  189143 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:38:00.528500  189143 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:38:00.528508  189143 kubeadm.go:319] 
	I1109 14:38:00.528595  189143 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token adxlwq.cgijiq3nisu2ttzm \
	I1109 14:38:00.528709  189143 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 14:38:00.528737  189143 kubeadm.go:319] 	--control-plane 
	I1109 14:38:00.528746  189143 kubeadm.go:319] 
	I1109 14:38:00.528835  189143 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:38:00.528844  189143 kubeadm.go:319] 
	I1109 14:38:00.528930  189143 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token adxlwq.cgijiq3nisu2ttzm \
	I1109 14:38:00.529049  189143 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 14:38:00.542250  189143 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:38:00.542506  189143 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 14:38:00.542624  189143 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:38:00.542723  189143 cni.go:84] Creating CNI manager for ""
	I1109 14:38:00.542746  189143 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:38:00.546129  189143 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:37:59.490132  190681 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.293260821s
	I1109 14:38:00.698666  190681 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.502633826s
	I1109 14:38:00.728034  190681 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:38:00.748774  190681 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:38:00.770877  190681 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:38:00.772910  190681 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-422728 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:38:00.788529  190681 kubeadm.go:319] [bootstrap-token] Using token: w7hsc2.zw6d7sksu6ywppck
	I1109 14:38:00.791639  190681 out.go:252]   - Configuring RBAC rules ...
	I1109 14:38:00.791769  190681 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:38:00.797848  190681 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:38:00.807117  190681 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:38:00.815658  190681 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:38:00.821107  190681 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:38:00.828767  190681 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:38:01.106548  190681 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:38:01.588431  190681 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:38:02.106954  190681 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:38:02.108563  190681 kubeadm.go:319] 
	I1109 14:38:02.108729  190681 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:38:02.108750  190681 kubeadm.go:319] 
	I1109 14:38:02.108828  190681 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:38:02.108833  190681 kubeadm.go:319] 
	I1109 14:38:02.108859  190681 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:38:02.108918  190681 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:38:02.108969  190681 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:38:02.108974  190681 kubeadm.go:319] 
	I1109 14:38:02.109028  190681 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:38:02.109032  190681 kubeadm.go:319] 
	I1109 14:38:02.109080  190681 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:38:02.109085  190681 kubeadm.go:319] 
	I1109 14:38:02.109137  190681 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:38:02.109212  190681 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:38:02.109283  190681 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:38:02.109292  190681 kubeadm.go:319] 
	I1109 14:38:02.109376  190681 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:38:02.109452  190681 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:38:02.109456  190681 kubeadm.go:319] 
	I1109 14:38:02.109539  190681 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w7hsc2.zw6d7sksu6ywppck \
	I1109 14:38:02.109641  190681 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 14:38:02.109662  190681 kubeadm.go:319] 	--control-plane 
	I1109 14:38:02.109667  190681 kubeadm.go:319] 
	I1109 14:38:02.109751  190681 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:38:02.109756  190681 kubeadm.go:319] 
	I1109 14:38:02.109838  190681 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w7hsc2.zw6d7sksu6ywppck \
	I1109 14:38:02.109940  190681 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 14:38:02.115588  190681 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:38:02.115835  190681 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 14:38:02.116070  190681 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:38:02.116099  190681 cni.go:84] Creating CNI manager for ""
	I1109 14:38:02.116108  190681 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:38:02.119328  190681 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:38:02.122338  190681 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:38:02.126894  190681 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:38:02.126934  190681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:38:02.141905  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
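	Both runs apply the generated kindnet manifest by invoking the node-local, versioned kubectl against the embedded kubeconfig. A minimal equivalent of that apply step via os/exec (the paths match the log but should be treated as placeholders):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifest runs the node-local kubectl against the given kubeconfig,
// the same shape of command the log shows for /var/tmp/minikube/cni.yaml.
func applyManifest(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifest(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/var/tmp/minikube/cni.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```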
	I1109 14:38:02.496545  190681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:38:02.496676  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:02.496775  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-422728 minikube.k8s.io/updated_at=2025_11_09T14_38_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=embed-certs-422728 minikube.k8s.io/primary=true
	I1109 14:38:02.646117  190681 ops.go:34] apiserver oom_adj: -16
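	The "apiserver oom_adj: -16" line comes from the earlier `cat /proc/$(pgrep kube-apiserver)/oom_adj`, i.e. the API server's legacy OOM score adjustment read straight from procfs. A small sketch that does the same lookup by scanning /proc for the process name (matching on comm is an assumption; pgrep matches more loosely):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// oomAdjFor returns the contents of /proc/<pid>/oom_adj for the first process
// whose comm matches name; a rough stand-in for the pgrep used in the log.
func oomAdjFor(name string) (string, error) {
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return "", err
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil {
			continue // not a PID directory, or the process exited
		}
		if strings.TrimSpace(string(comm)) == name {
			adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
			return strings.TrimSpace(string(adj)), err
		}
	}
	return "", fmt.Errorf("no process named %q", name)
}

func main() {
	adj, err := oomAdjFor("kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(adj) // the log reports -16 for the API server
}
```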
	I1109 14:38:02.646246  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:00.549060  189143 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:38:00.555763  189143 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:38:00.555783  189143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:38:00.586867  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:38:01.051565  189143 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:38:01.051646  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:01.051726  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-103048 minikube.k8s.io/updated_at=2025_11_09T14_38_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=default-k8s-diff-port-103048 minikube.k8s.io/primary=true
	I1109 14:38:01.328259  189143 ops.go:34] apiserver oom_adj: -16
	I1109 14:38:01.328378  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:01.828842  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:02.328513  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:02.829255  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:03.329028  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:03.829088  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:04.328572  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:04.829274  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:05.329398  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:05.829067  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:06.049792  189143 kubeadm.go:1114] duration metric: took 4.998205783s to wait for elevateKubeSystemPrivileges
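	The repeated `kubectl get sa default` calls above (and the ~5s duration metric) are a poll loop: the default service account only exists once the controller manager has bootstrapped the namespace, so the check is retried until it succeeds. A generic version of that pattern, polling a command at a fixed interval with a deadline (the interval and timeout values here are illustrative, not taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCommand re-runs the command every interval until it exits 0 or the
// timeout elapses, mirroring the "get sa default" poll seen in the log.
func waitForCommand(interval, timeout time.Duration, name string, args ...string) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		if err := exec.Command(name, args...).Run(); err == nil {
			return time.Since(start), nil
		}
		if time.Now().After(deadline) {
			return time.Since(start), fmt.Errorf("%s did not succeed within %s", name, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	took, err := waitForCommand(500*time.Millisecond, 2*time.Minute,
		"sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	fmt.Println(took, err)
}
```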
	I1109 14:38:06.049826  189143 kubeadm.go:403] duration metric: took 29.285778776s to StartCluster
	I1109 14:38:06.049848  189143 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:38:06.049912  189143 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:38:06.050565  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:38:06.050762  189143 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:38:06.050922  189143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:38:06.051202  189143 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:38:06.051239  189143 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:38:06.051303  189143 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103048"
	I1109 14:38:06.051316  189143 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103048"
	I1109 14:38:06.051339  189143 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:38:06.051933  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:38:06.052365  189143 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103048"
	I1109 14:38:06.052397  189143 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103048"
	I1109 14:38:06.052688  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:38:06.059051  189143 out.go:179] * Verifying Kubernetes components...
	I1109 14:38:06.065944  189143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:38:06.092893  189143 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:38:03.147142  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:03.647289  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:04.146648  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:04.647045  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:05.146960  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:05.647028  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:06.152032  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:06.647057  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:07.146723  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:07.399244  190681 kubeadm.go:1114] duration metric: took 4.902610827s to wait for elevateKubeSystemPrivileges
	I1109 14:38:07.399270  190681 kubeadm.go:403] duration metric: took 24.886583855s to StartCluster
	I1109 14:38:07.399286  190681 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:38:07.399340  190681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:38:07.400744  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:38:07.400965  190681 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:38:07.401068  190681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:38:07.401372  190681 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:38:07.401416  190681 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:38:07.401481  190681 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422728"
	I1109 14:38:07.401496  190681 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422728"
	I1109 14:38:07.401515  190681 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:38:07.402301  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:38:07.402746  190681 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422728"
	I1109 14:38:07.402764  190681 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422728"
	I1109 14:38:07.403030  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:38:07.406945  190681 out.go:179] * Verifying Kubernetes components...
	I1109 14:38:07.416317  190681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:38:07.439165  190681 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:38:07.443535  190681 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:38:07.443560  190681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:38:07.443629  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:38:07.454489  190681 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422728"
	I1109 14:38:07.454536  190681 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:38:07.454960  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:38:07.487950  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:38:07.502084  190681 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:38:07.502104  190681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:38:07.502164  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:38:07.530312  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:38:06.095895  189143 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:38:06.095918  189143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:38:06.095987  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:38:06.100625  189143 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103048"
	I1109 14:38:06.100676  189143 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:38:06.101099  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:38:06.136914  189143 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:38:06.136939  189143 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:38:06.137009  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:38:06.152378  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:38:06.175502  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:38:06.717262  189143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:38:06.749468  189143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:38:06.749679  189143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:38:06.752463  189143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:38:07.580151  189143 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1109 14:38:07.582479  189143 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:38:07.973526  189143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.220998864s)
	I1109 14:38:07.976593  189143 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1109 14:38:07.979442  189143 addons.go:515] duration metric: took 1.928195233s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1109 14:38:08.085303  189143 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-103048" context rescaled to 1 replicas
	I1109 14:38:08.047785  190681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:38:08.074632  190681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:38:08.074759  190681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:38:08.097787  190681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:38:08.719937  190681 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422728" to be "Ready" ...
	I1109 14:38:08.720168  190681 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1109 14:38:08.989996  190681 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1109 14:38:08.992915  190681 addons.go:515] duration metric: took 1.591484578s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1109 14:38:09.223822  190681 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-422728" context rescaled to 1 replicas
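[Editor's note] The kapi.go lines above report the coredns deployment being "rescaled to 1 replicas" right after the addons are applied. A minimal client-go sketch of that scale update follows; it is not minikube's own implementation, and the kubeconfig location is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig for the cluster under test at the default location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("kube-system")

	// Read the scale subresource of the coredns deployment...
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// ...and pin it to one replica, as the "rescaled to 1 replicas" log lines describe.
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}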
	W1109 14:38:10.722642  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:12.723522  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:09.585194  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:12.085539  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:15.223153  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:17.223262  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:14.585848  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:16.586032  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:19.223547  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:21.224085  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:18.586067  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:21.085381  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:23.086050  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:23.723395  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:25.723790  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:25.086470  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:27.585606  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:28.222840  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:30.223153  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:32.223588  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:29.585772  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:32.085381  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:34.723152  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:37.223685  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:34.586301  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:37.085897  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:39.722888  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:41.723575  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:39.585697  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:41.585978  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:44.223726  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:46.224037  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:43.586549  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:46.085605  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	I1109 14:38:47.085517  189143 node_ready.go:49] node "default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:47.085546  189143 node_ready.go:38] duration metric: took 39.503046372s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:38:47.085562  189143 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:38:47.085631  189143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:38:47.097612  189143 api_server.go:72] duration metric: took 41.046822049s to wait for apiserver process to appear ...
	I1109 14:38:47.097635  189143 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:38:47.097654  189143 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:38:47.106692  189143 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:38:47.107695  189143 api_server.go:141] control plane version: v1.34.1
	I1109 14:38:47.107719  189143 api_server.go:131] duration metric: took 10.077431ms to wait for apiserver health ...
	I1109 14:38:47.107728  189143 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:38:47.110886  189143 system_pods.go:59] 8 kube-system pods found
	I1109 14:38:47.110923  189143 system_pods.go:61] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:47.110930  189143 system_pods.go:61] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.110937  189143 system_pods.go:61] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.110946  189143 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.110951  189143 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.110962  189143 system_pods.go:61] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.110966  189143 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.110983  189143 system_pods.go:61] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:47.110989  189143 system_pods.go:74] duration metric: took 3.256387ms to wait for pod list to return data ...
	I1109 14:38:47.111002  189143 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:38:47.113862  189143 default_sa.go:45] found service account: "default"
	I1109 14:38:47.113890  189143 default_sa.go:55] duration metric: took 2.881639ms for default service account to be created ...
	I1109 14:38:47.113900  189143 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:38:47.117019  189143 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:47.117056  189143 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:47.117062  189143 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.117069  189143 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.117075  189143 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.117105  189143 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.117117  189143 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.117124  189143 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.117131  189143 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:47.117176  189143 retry.go:31] will retry after 256.058817ms: missing components: kube-dns
	I1109 14:38:47.381163  189143 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:47.381239  189143 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:47.381247  189143 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.381254  189143 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.381258  189143 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.381262  189143 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.381267  189143 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.381271  189143 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.381276  189143 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:47.381291  189143 retry.go:31] will retry after 235.739071ms: missing components: kube-dns
	I1109 14:38:47.621143  189143 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:47.621182  189143 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:47.621189  189143 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.621195  189143 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.621199  189143 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.621204  189143 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.621208  189143 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.621218  189143 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.621224  189143 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:47.621245  189143 retry.go:31] will retry after 351.929389ms: missing components: kube-dns
	I1109 14:38:47.979300  189143 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:47.979333  189143 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running
	I1109 14:38:47.979341  189143 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.979347  189143 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.979351  189143 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.979355  189143 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.979382  189143 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.979392  189143 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.979395  189143 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:38:47.979404  189143 system_pods.go:126] duration metric: took 865.497806ms to wait for k8s-apps to be running ...
	I1109 14:38:47.979429  189143 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:38:47.979497  189143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:38:47.994182  189143 system_svc.go:56] duration metric: took 14.750051ms WaitForService to wait for kubelet
	I1109 14:38:47.994210  189143 kubeadm.go:587] duration metric: took 41.943424065s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:38:47.994247  189143 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:38:47.997674  189143 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:38:47.997704  189143 node_conditions.go:123] node cpu capacity is 2
	I1109 14:38:47.997717  189143 node_conditions.go:105] duration metric: took 3.460623ms to run NodePressure ...
	I1109 14:38:47.997731  189143 start.go:242] waiting for startup goroutines ...
	I1109 14:38:47.997761  189143 start.go:247] waiting for cluster config update ...
	I1109 14:38:47.997786  189143 start.go:256] writing updated cluster config ...
	I1109 14:38:47.998072  189143 ssh_runner.go:195] Run: rm -f paused
	I1109 14:38:48.002012  189143 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:38:48.006052  189143 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.017202  189143 pod_ready.go:94] pod "coredns-66bc5c9577-rbvc2" is "Ready"
	I1109 14:38:48.017233  189143 pod_ready.go:86] duration metric: took 11.152681ms for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.020187  189143 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.026142  189143 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:48.026213  189143 pod_ready.go:86] duration metric: took 5.997743ms for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.028747  189143 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.034343  189143 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:48.034375  189143 pod_ready.go:86] duration metric: took 5.596976ms for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.037412  189143 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.406383  189143 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:48.406425  189143 pod_ready.go:86] duration metric: took 368.986966ms for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.606603  189143 pod_ready.go:83] waiting for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:49.006702  189143 pod_ready.go:94] pod "kube-proxy-c57m2" is "Ready"
	I1109 14:38:49.006728  189143 pod_ready.go:86] duration metric: took 400.099585ms for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:49.207114  189143 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:49.606163  189143 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:49.606194  189143 pod_ready.go:86] duration metric: took 399.053714ms for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:49.606206  189143 pod_ready.go:40] duration metric: took 1.604164876s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:38:49.663295  189143 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:38:49.666538  189143 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103048" cluster and "default" namespace by default
	W1109 14:38:48.722666  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	I1109 14:38:49.222796  190681 node_ready.go:49] node "embed-certs-422728" is "Ready"
	I1109 14:38:49.222825  190681 node_ready.go:38] duration metric: took 40.502853705s for node "embed-certs-422728" to be "Ready" ...
	I1109 14:38:49.222839  190681 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:38:49.222896  190681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:38:49.242437  190681 api_server.go:72] duration metric: took 41.841444224s to wait for apiserver process to appear ...
	I1109 14:38:49.242463  190681 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:38:49.242481  190681 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:38:49.251813  190681 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:38:49.253127  190681 api_server.go:141] control plane version: v1.34.1
	I1109 14:38:49.253156  190681 api_server.go:131] duration metric: took 10.686659ms to wait for apiserver health ...
	I1109 14:38:49.253166  190681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:38:49.256773  190681 system_pods.go:59] 8 kube-system pods found
	I1109 14:38:49.256847  190681 system_pods.go:61] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:49.256857  190681 system_pods.go:61] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:49.256878  190681 system_pods.go:61] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:49.256883  190681 system_pods.go:61] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:49.256888  190681 system_pods.go:61] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:49.256893  190681 system_pods.go:61] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:49.256921  190681 system_pods.go:61] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:49.256948  190681 system_pods.go:61] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:49.256955  190681 system_pods.go:74] duration metric: took 3.784022ms to wait for pod list to return data ...
	I1109 14:38:49.256966  190681 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:38:49.259821  190681 default_sa.go:45] found service account: "default"
	I1109 14:38:49.259844  190681 default_sa.go:55] duration metric: took 2.872417ms for default service account to be created ...
	I1109 14:38:49.259853  190681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:38:49.266398  190681 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:49.266463  190681 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:49.266471  190681 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:49.266478  190681 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:49.266484  190681 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:49.266494  190681 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:49.266498  190681 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:49.266503  190681 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:49.266520  190681 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:49.266541  190681 retry.go:31] will retry after 228.694576ms: missing components: kube-dns
	I1109 14:38:49.500658  190681 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:49.500694  190681 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:49.500701  190681 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:49.500709  190681 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:49.500735  190681 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:49.500751  190681 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:49.500756  190681 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:49.500768  190681 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:49.500775  190681 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:49.500791  190681 retry.go:31] will retry after 289.168887ms: missing components: kube-dns
	I1109 14:38:49.793798  190681 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:49.793832  190681 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:49.793839  190681 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:49.793845  190681 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:49.793849  190681 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:49.793854  190681 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:49.793859  190681 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:49.793863  190681 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:49.793868  190681 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:49.793882  190681 retry.go:31] will retry after 329.103159ms: missing components: kube-dns
	I1109 14:38:50.127461  190681 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:50.127497  190681 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running
	I1109 14:38:50.127504  190681 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:50.127508  190681 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:50.127512  190681 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:50.127517  190681 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:50.127521  190681 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:50.127531  190681 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:50.127535  190681 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running
	I1109 14:38:50.127544  190681 system_pods.go:126] duration metric: took 867.625882ms to wait for k8s-apps to be running ...
	I1109 14:38:50.127552  190681 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:38:50.127624  190681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:38:50.141671  190681 system_svc.go:56] duration metric: took 14.110334ms WaitForService to wait for kubelet
	I1109 14:38:50.141697  190681 kubeadm.go:587] duration metric: took 42.740708795s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:38:50.141713  190681 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:38:50.144545  190681 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:38:50.144576  190681 node_conditions.go:123] node cpu capacity is 2
	I1109 14:38:50.144589  190681 node_conditions.go:105] duration metric: took 2.869742ms to run NodePressure ...
	I1109 14:38:50.144600  190681 start.go:242] waiting for startup goroutines ...
	I1109 14:38:50.144607  190681 start.go:247] waiting for cluster config update ...
	I1109 14:38:50.144618  190681 start.go:256] writing updated cluster config ...
	I1109 14:38:50.144895  190681 ssh_runner.go:195] Run: rm -f paused
	I1109 14:38:50.148730  190681 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:38:50.227333  190681 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.233164  190681 pod_ready.go:94] pod "coredns-66bc5c9577-4hk6l" is "Ready"
	I1109 14:38:50.233193  190681 pod_ready.go:86] duration metric: took 5.833065ms for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.236197  190681 pod_ready.go:83] waiting for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.242450  190681 pod_ready.go:94] pod "etcd-embed-certs-422728" is "Ready"
	I1109 14:38:50.242477  190681 pod_ready.go:86] duration metric: took 6.258586ms for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.245018  190681 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.250812  190681 pod_ready.go:94] pod "kube-apiserver-embed-certs-422728" is "Ready"
	I1109 14:38:50.250833  190681 pod_ready.go:86] duration metric: took 5.795ms for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.254399  190681 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.552915  190681 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422728" is "Ready"
	I1109 14:38:50.552943  190681 pod_ready.go:86] duration metric: took 298.520803ms for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.753326  190681 pod_ready.go:83] waiting for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:51.153649  190681 pod_ready.go:94] pod "kube-proxy-5zn8j" is "Ready"
	I1109 14:38:51.153732  190681 pod_ready.go:86] duration metric: took 400.381845ms for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:51.352991  190681 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:51.753508  190681 pod_ready.go:94] pod "kube-scheduler-embed-certs-422728" is "Ready"
	I1109 14:38:51.753536  190681 pod_ready.go:86] duration metric: took 400.518087ms for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:51.753548  190681 pod_ready.go:40] duration metric: took 1.604783227s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:38:51.808397  190681 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:38:51.814313  190681 out.go:179] * Done! kubectl is now configured to use "embed-certs-422728" cluster and "default" namespace by default
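[Editor's note] Both start flows above wait for the node to report Ready, then poll the apiserver /healthz endpoint until it returns 200 with body "ok" before checking kube-system pods. A minimal sketch of that probe using only the Go standard library is below; the endpoint is taken from the log (port 8444 for default-k8s-diff-port-103048), and skipping TLS verification is a simplification — minikube's api_server.go trusts the cluster CA instead.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls <apiserver>/healthz until it returns HTTP 200 with body "ok",
// mirroring the "waiting for apiserver healthz status" step logged above.
func probeHealthz(base string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification for brevity.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", base, timeout)
}

func main() {
	if err := probeHealthz("https://192.168.85.2:8444", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver healthz: ok")
}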
	
	
	==> CRI-O <==
	Nov 09 14:38:47 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:47.273697819Z" level=info msg="Created container f015c272c5eb3b6f8b950bcdae180376559d624731e4d8dad8a64a15310a7eae: kube-system/coredns-66bc5c9577-rbvc2/coredns" id=e245e959-5d05-4046-8ada-9de8bf8cada1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:38:47 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:47.274635197Z" level=info msg="Starting container: f015c272c5eb3b6f8b950bcdae180376559d624731e4d8dad8a64a15310a7eae" id=4b24e29b-c320-4d96-8b0b-04e441873b1d name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:38:47 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:47.281878997Z" level=info msg="Started container" PID=1758 containerID=f015c272c5eb3b6f8b950bcdae180376559d624731e4d8dad8a64a15310a7eae description=kube-system/coredns-66bc5c9577-rbvc2/coredns id=4b24e29b-c320-4d96-8b0b-04e441873b1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=5d3d6213e6126f8c8ae7189fd4f76f487d9aa706f666b208b52f12caeba1e8d8
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.223815108Z" level=info msg="Running pod sandbox: default/busybox/POD" id=871f872e-d3cb-415f-a3c3-b2e1bdc6df6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.223908532Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.238708085Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d20b7ef550f3f1fbb3566d1363eced2e82cc5adfa368c194da5a27a72575abc7 UID:f325ae72-af8a-416b-b2a2-8fe2e1b4d024 NetNS:/var/run/netns/ca8e3f06-c67f-45e3-8f10-d3d39bb03090 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d758}] Aliases:map[]}"
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.238745788Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.255365508Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d20b7ef550f3f1fbb3566d1363eced2e82cc5adfa368c194da5a27a72575abc7 UID:f325ae72-af8a-416b-b2a2-8fe2e1b4d024 NetNS:/var/run/netns/ca8e3f06-c67f-45e3-8f10-d3d39bb03090 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d758}] Aliases:map[]}"
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.255515844Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.258451933Z" level=info msg="Ran pod sandbox d20b7ef550f3f1fbb3566d1363eced2e82cc5adfa368c194da5a27a72575abc7 with infra container: default/busybox/POD" id=871f872e-d3cb-415f-a3c3-b2e1bdc6df6f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.261785211Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dc609635-41e8-493f-8973-9e5a8124f86c name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.262027585Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dc609635-41e8-493f-8973-9e5a8124f86c name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.262079269Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dc609635-41e8-493f-8973-9e5a8124f86c name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.263563228Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2402c474-3ddb-4b30-aa5f-1d957f721b4c name=/runtime.v1.ImageService/PullImage
	Nov 09 14:38:50 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:50.268855496Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.372791614Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=2402c474-3ddb-4b30-aa5f-1d957f721b4c name=/runtime.v1.ImageService/PullImage
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.376931036Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a879c481-eb7b-46cf-913f-cb20f7aea705 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.381331105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5c7215ac-6592-4a38-ba7f-a66345576b3b name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.388239936Z" level=info msg="Creating container: default/busybox/busybox" id=023bedb0-c238-4dc0-b52d-ed57a74c7508 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.388372401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.399497086Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.40012123Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.415640018Z" level=info msg="Created container 4ba1720a8f86f84197b179fb9606fc508b480986959412e57e47eb3276ff0b61: default/busybox/busybox" id=023bedb0-c238-4dc0-b52d-ed57a74c7508 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.416728405Z" level=info msg="Starting container: 4ba1720a8f86f84197b179fb9606fc508b480986959412e57e47eb3276ff0b61" id=49952794-2653-40b7-987a-f76b6af8c83e name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:38:52 default-k8s-diff-port-103048 crio[842]: time="2025-11-09T14:38:52.429944454Z" level=info msg="Started container" PID=1815 containerID=4ba1720a8f86f84197b179fb9606fc508b480986959412e57e47eb3276ff0b61 description=default/busybox/busybox id=49952794-2653-40b7-987a-f76b6af8c83e name=/runtime.v1.RuntimeService/StartContainer sandboxID=d20b7ef550f3f1fbb3566d1363eced2e82cc5adfa368c194da5a27a72575abc7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4ba1720a8f86f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   d20b7ef550f3f       busybox                                                default
	f015c272c5eb3       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   5d3d6213e6126       coredns-66bc5c9577-rbvc2                               kube-system
	f1fc4d4c90b58       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   5fbfb52852e45       storage-provisioner                                    kube-system
	44478051f104b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   06dfd931c18a1       kube-proxy-c57m2                                       kube-system
	35302e734ccba       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   5469d3e412015       kindnet-tz2x5                                          kube-system
	25b9c4b7620e5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   27439398e2833       kube-scheduler-default-k8s-diff-port-103048            kube-system
	07863602cf668       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   a83de990514f8       etcd-default-k8s-diff-port-103048                      kube-system
	c3f2e334bcad7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   5e1633d05728a       kube-controller-manager-default-k8s-diff-port-103048   kube-system
	dd0245b0767e0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   0ed17c17e48e3       kube-apiserver-default-k8s-diff-port-103048            kube-system
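[Editor's note] The container status table above is the CRI view of the node. The same listing can be read directly from the CRI-O socket over the CRI gRPC API; the sketch below uses the upstream k8s.io/cri-api v1 client rather than minikube's own code, and the socket path is the assumed CRI-O default.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O listens on its default unix socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// ListContainers is the call behind "container status" style listings.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%-13.13s %-25s %-20s created %s\n",
			c.Id, c.Metadata.Name, c.State.String(),
			time.Unix(0, c.CreatedAt).Format(time.RFC3339))
	}
}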
	
	
	==> coredns [f015c272c5eb3b6f8b950bcdae180376559d624731e4d8dad8a64a15310a7eae] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56189 - 19009 "HINFO IN 7954671563450847899.4394004431074979143. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011610153s
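[Editor's note] This CoreDNS instance is running the Corefile rewritten in the start log above, which inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.85.1 here). A small way to verify that record from inside the cluster network is sketched below; the kube-dns service IP 10.96.0.10 is an assumption (the conventional default), and this is not minikube code.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Assumption: cluster DNS (CoreDNS) is reachable at the usual service IP.
	const dnsServer = "10.96.0.10:53"

	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", dnsServer)
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Should resolve to the host gateway injected into the Corefile's hosts block.
	addrs, err := r.LookupHost(ctx, "host.minikube.internal")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("host.minikube.internal ->", addrs)
}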
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-103048
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-103048
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=default-k8s-diff-port-103048
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_38_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-103048
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:38:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:38:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:38:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:38:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:38:51 +0000   Sun, 09 Nov 2025 14:38:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-103048
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6ac075f8-cd4f-431f-b369-b54146be0749
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-rbvc2                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     53s
	  kube-system                 etcd-default-k8s-diff-port-103048                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-tz2x5                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-103048             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-103048    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-c57m2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-103048             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Warning  CgroupV1                 72s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-103048 event: Registered Node default-k8s-diff-port-103048 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-103048 status is now: NodeReady
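[Editor's note] The Capacity/Allocatable and Conditions blocks above are the fields the node_conditions.go and node_ready.go steps in the start log read back through the API ("node cpu capacity is 2", "node storage ephemeral capacity is 203034800Ki"). A hedged client-go sketch that prints the same data is below; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("  Ready=%s (reason: %s)\n", c.Status, c.Reason)
			}
		}
	}
}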
	
	
	==> dmesg <==
	[ +35.606556] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [07863602cf668603597724e0044c4abe5b86d62bd7b916caadf2fabed18414a7] <==
	{"level":"warn","ts":"2025-11-09T14:37:52.444244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:52.508599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:52.545104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:52.627999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:52.667042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:52.768244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:52.812972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:52.890746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:52.949212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.045321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.124190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.160371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.213820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.279422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.356047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.432752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.498181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.595370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.644107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.724003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.774755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.860387Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.897296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:53.921853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:54.146989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32988","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:38:59 up  1:21,  0 user,  load average: 2.35, 3.27, 2.72
	Linux default-k8s-diff-port-103048 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [35302e734ccba37e7b7780f54316da74e85849f96b579bf26b9e77125cc05c53] <==
	I1109 14:38:06.424181       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:38:06.424446       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:38:06.424563       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:38:06.424575       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:38:06.424585       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:38:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:38:06.621462       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:38:06.621481       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:38:06.621490       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:38:06.621779       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:38:36.621094       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:38:36.622025       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:38:36.622112       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:38:36.622125       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1109 14:38:38.121905       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:38:38.121939       1 metrics.go:72] Registering metrics
	I1109 14:38:38.122006       1 controller.go:711] "Syncing nftables rules"
	I1109 14:38:46.624532       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:38:46.624571       1 main.go:301] handling current node
	I1109 14:38:56.622701       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:38:56.622766       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dd0245b0767e00b25eb16e74125558db9d52ccaa4e66a6e4c487b49819301450] <==
	I1109 14:37:56.168085       1 policy_source.go:240] refreshing policies
	I1109 14:37:56.190183       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:37:56.240129       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:37:56.272266       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:37:56.293812       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:37:56.294268       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:37:56.397455       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:37:56.405660       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:37:56.519661       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:37:56.554357       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:37:56.554445       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:37:58.669034       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:37:58.747109       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:37:58.887558       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:37:58.895778       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1109 14:37:58.897034       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:37:58.905016       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:37:59.871935       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:37:59.906849       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:37:59.937808       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:37:59.963523       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:38:05.623358       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1109 14:38:05.810108       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:38:05.912495       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:38:05.943505       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [c3f2e334bcad7945d28e0a609eacc02c8f9a59cb7ca874b3bd4b89b523245d66] <==
	I1109 14:38:04.920292       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:38:04.923959       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:38:04.924982       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:38:04.927380       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:38:04.929918       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:38:04.931756       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:38:04.934035       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:38:04.939574       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:38:04.942034       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:38:04.957550       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:38:04.964858       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 14:38:04.966465       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:38:04.966504       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:38:04.970400       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:38:04.970537       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:38:04.970686       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:38:04.970813       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:38:04.970896       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:38:04.973356       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:38:04.973422       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:38:04.975973       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:38:04.977476       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:38:04.978637       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:38:05.003409       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:38:49.927801       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [44478051f104b7851e99fa8bd12f89f8f628eebfe44fff050602fe0cf74c24d0] <==
	I1109 14:38:06.735408       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:38:06.886829       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:38:06.987644       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:38:06.987704       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:38:06.987844       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:38:07.112585       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:38:07.112648       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:38:07.135456       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:38:07.139222       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:38:07.144647       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:38:07.150174       1 config.go:200] "Starting service config controller"
	I1109 14:38:07.150262       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:38:07.152016       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:38:07.152067       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:38:07.152106       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:38:07.152148       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:38:07.155341       1 config.go:309] "Starting node config controller"
	I1109 14:38:07.155536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:38:07.155584       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:38:07.252343       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:38:07.252436       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:38:07.252266       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [25b9c4b7620e5bee3e6e61460a406ebcd4c4660b92510e7844da9e87ce426bf6] <==
	I1109 14:37:51.570829       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:37:58.447934       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:37:58.448338       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:37:58.448382       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:37:58.448427       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:37:58.497737       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:37:58.497774       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:37:58.508054       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:37:58.508097       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:37:58.508949       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1109 14:37:58.521713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1109 14:37:58.523283       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:37:59.908349       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:38:01 default-k8s-diff-port-103048 kubelet[1318]: E1109 14:38:01.667543    1318 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-default-k8s-diff-port-103048\" already exists" pod="kube-system/kube-controller-manager-default-k8s-diff-port-103048"
	Nov 09 14:38:04 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:04.952564    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 09 14:38:04 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:04.953162    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.794158    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpptj\" (UniqueName: \"kubernetes.io/projected/d93835ed-7e40-4171-a3ee-f815a8d20380-kube-api-access-lpptj\") pod \"kube-proxy-c57m2\" (UID: \"d93835ed-7e40-4171-a3ee-f815a8d20380\") " pod="kube-system/kube-proxy-c57m2"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.794217    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d93835ed-7e40-4171-a3ee-f815a8d20380-kube-proxy\") pod \"kube-proxy-c57m2\" (UID: \"d93835ed-7e40-4171-a3ee-f815a8d20380\") " pod="kube-system/kube-proxy-c57m2"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.794242    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d93835ed-7e40-4171-a3ee-f815a8d20380-lib-modules\") pod \"kube-proxy-c57m2\" (UID: \"d93835ed-7e40-4171-a3ee-f815a8d20380\") " pod="kube-system/kube-proxy-c57m2"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.794259    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/41a63a24-6d2b-453d-a118-2a5b03e08396-xtables-lock\") pod \"kindnet-tz2x5\" (UID: \"41a63a24-6d2b-453d-a118-2a5b03e08396\") " pod="kube-system/kindnet-tz2x5"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.794276    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/41a63a24-6d2b-453d-a118-2a5b03e08396-lib-modules\") pod \"kindnet-tz2x5\" (UID: \"41a63a24-6d2b-453d-a118-2a5b03e08396\") " pod="kube-system/kindnet-tz2x5"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.794295    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9svn5\" (UniqueName: \"kubernetes.io/projected/41a63a24-6d2b-453d-a118-2a5b03e08396-kube-api-access-9svn5\") pod \"kindnet-tz2x5\" (UID: \"41a63a24-6d2b-453d-a118-2a5b03e08396\") " pod="kube-system/kindnet-tz2x5"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.794313    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d93835ed-7e40-4171-a3ee-f815a8d20380-xtables-lock\") pod \"kube-proxy-c57m2\" (UID: \"d93835ed-7e40-4171-a3ee-f815a8d20380\") " pod="kube-system/kube-proxy-c57m2"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.794330    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/41a63a24-6d2b-453d-a118-2a5b03e08396-cni-cfg\") pod \"kindnet-tz2x5\" (UID: \"41a63a24-6d2b-453d-a118-2a5b03e08396\") " pod="kube-system/kindnet-tz2x5"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:05.919650    1318 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:38:05 default-k8s-diff-port-103048 kubelet[1318]: W1109 14:38:05.996581    1318 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/crio-5469d3e4120158ca256a5a6e2bb84db2a56ad7a1e1fcf7e40b213cbb38627ef2 WatchSource:0}: Error finding container 5469d3e4120158ca256a5a6e2bb84db2a56ad7a1e1fcf7e40b213cbb38627ef2: Status 404 returned error can't find the container with id 5469d3e4120158ca256a5a6e2bb84db2a56ad7a1e1fcf7e40b213cbb38627ef2
	Nov 09 14:38:06 default-k8s-diff-port-103048 kubelet[1318]: W1109 14:38:06.014766    1318 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/crio-06dfd931c18a18b52565d8111af1734ebc8f47f7be80032f3f3e7ff05dafdfea WatchSource:0}: Error finding container 06dfd931c18a18b52565d8111af1734ebc8f47f7be80032f3f3e7ff05dafdfea: Status 404 returned error can't find the container with id 06dfd931c18a18b52565d8111af1734ebc8f47f7be80032f3f3e7ff05dafdfea
	Nov 09 14:38:06 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:06.777706    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tz2x5" podStartSLOduration=1.7776650539999999 podStartE2EDuration="1.777665054s" podCreationTimestamp="2025-11-09 14:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:38:06.713916997 +0000 UTC m=+6.910383723" watchObservedRunningTime="2025-11-09 14:38:06.777665054 +0000 UTC m=+6.974131780"
	Nov 09 14:38:07 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:07.829875    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-c57m2" podStartSLOduration=2.829846059 podStartE2EDuration="2.829846059s" podCreationTimestamp="2025-11-09 14:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:38:06.779015924 +0000 UTC m=+6.975482650" watchObservedRunningTime="2025-11-09 14:38:07.829846059 +0000 UTC m=+8.026312777"
	Nov 09 14:38:46 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:46.811272    1318 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 09 14:38:47 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:47.028767    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/251b0857-5681-47c0-b891-8a4c109aaa4b-tmp\") pod \"storage-provisioner\" (UID: \"251b0857-5681-47c0-b891-8a4c109aaa4b\") " pod="kube-system/storage-provisioner"
	Nov 09 14:38:47 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:47.028815    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2hk\" (UniqueName: \"kubernetes.io/projected/251b0857-5681-47c0-b891-8a4c109aaa4b-kube-api-access-jk2hk\") pod \"storage-provisioner\" (UID: \"251b0857-5681-47c0-b891-8a4c109aaa4b\") " pod="kube-system/storage-provisioner"
	Nov 09 14:38:47 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:47.028839    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2c09df3-22f7-4863-81b7-71d92e6457c7-config-volume\") pod \"coredns-66bc5c9577-rbvc2\" (UID: \"a2c09df3-22f7-4863-81b7-71d92e6457c7\") " pod="kube-system/coredns-66bc5c9577-rbvc2"
	Nov 09 14:38:47 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:47.028859    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-552l5\" (UniqueName: \"kubernetes.io/projected/a2c09df3-22f7-4863-81b7-71d92e6457c7-kube-api-access-552l5\") pod \"coredns-66bc5c9577-rbvc2\" (UID: \"a2c09df3-22f7-4863-81b7-71d92e6457c7\") " pod="kube-system/coredns-66bc5c9577-rbvc2"
	Nov 09 14:38:47 default-k8s-diff-port-103048 kubelet[1318]: W1109 14:38:47.180901    1318 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/crio-5fbfb52852e4589a4d7fa90cc2fe9898ac6249ca33f6d264feab9c18c4e0427a WatchSource:0}: Error finding container 5fbfb52852e4589a4d7fa90cc2fe9898ac6249ca33f6d264feab9c18c4e0427a: Status 404 returned error can't find the container with id 5fbfb52852e4589a4d7fa90cc2fe9898ac6249ca33f6d264feab9c18c4e0427a
	Nov 09 14:38:47 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:47.833027    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rbvc2" podStartSLOduration=41.832999708 podStartE2EDuration="41.832999708s" podCreationTimestamp="2025-11-09 14:38:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:38:47.783181773 +0000 UTC m=+47.979648507" watchObservedRunningTime="2025-11-09 14:38:47.832999708 +0000 UTC m=+48.029466451"
	Nov 09 14:38:49 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:49.905880    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.905835687 podStartE2EDuration="42.905835687s" podCreationTimestamp="2025-11-09 14:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:38:47.876480124 +0000 UTC m=+48.072946858" watchObservedRunningTime="2025-11-09 14:38:49.905835687 +0000 UTC m=+50.102302404"
	Nov 09 14:38:50 default-k8s-diff-port-103048 kubelet[1318]: I1109 14:38:50.056517    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56txw\" (UniqueName: \"kubernetes.io/projected/f325ae72-af8a-416b-b2a2-8fe2e1b4d024-kube-api-access-56txw\") pod \"busybox\" (UID: \"f325ae72-af8a-416b-b2a2-8fe2e1b4d024\") " pod="default/busybox"
	
	
	==> storage-provisioner [f1fc4d4c90b5884fc74577c094b380a0aa73b722f54e66f386f2739633d2daf0] <==
	I1109 14:38:47.265473       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:38:47.287249       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:38:47.287362       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:38:47.290969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:47.300133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:38:47.314710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:38:47.314989       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103048_37c3b51f-0418-4370-bc5e-81d9ea2d382c!
	I1109 14:38:47.322310       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ec6f7261-5e6f-4cd5-8d6b-f26a96ba18b9", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-103048_37c3b51f-0418-4370-bc5e-81d9ea2d382c became leader
	W1109 14:38:47.337881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:47.342210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:38:47.415973       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103048_37c3b51f-0418-4370-bc5e-81d9ea2d382c!
	W1109 14:38:49.346913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:49.357513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:51.360358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:51.367490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:53.370594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:53.375137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:55.378912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:55.386362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:57.389831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:57.398288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:59.402175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:59.411037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-103048 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (396.582301ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
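The error chain quoted above ("check paused: list paused: runc: sudo runc list -f json") shows the addon enable aborting in minikube's paused-state check, which runs `sudo runc list -f json` on the node and treats the missing /run/runc directory as a hard failure. A minimal diagnostic sketch against the same profile, offered as an editor's note rather than part of the test suite: it only re-runs the command minikube already reports and then inspects the state directory.

	# Re-run the exact check that failed, through the node shell for this profile.
	out/minikube-linux-arm64 -p embed-certs-422728 ssh -- sudo runc list -f json
	# If it still reports "open /run/runc: no such file or directory", check whether
	# the runc state directory exists on the node at all.
	out/minikube-linux-arm64 -p embed-certs-422728 ssh -- ls -ld /run/runc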
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-422728 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-422728 describe deploy/metrics-server -n kube-system: exit status 1 (105.437891ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-422728 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
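For context on the assertion above: the test expects the metrics-server deployment image to contain "fake.domain/registry.k8s.io/echoserver:1.4", but the describe call returned NotFound, presumably because the enable aborted before the deployment was created. When the deployment does exist, the registered image can be read directly; this is a hedged sketch using standard kubectl jsonpath output rather than anything the test itself runs.

	# Print the container image(s) of the metrics-server addon deployment.
	kubectl --context embed-certs-422728 -n kube-system get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'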
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-422728
helpers_test.go:243: (dbg) docker inspect embed-certs-422728:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12",
	        "Created": "2025-11-09T14:37:33.73724942Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 191566,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:37:33.801913154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/hostname",
	        "HostsPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/hosts",
	        "LogPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12-json.log",
	        "Name": "/embed-certs-422728",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-422728:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-422728",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12",
	                "LowerDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-422728",
	                "Source": "/var/lib/docker/volumes/embed-certs-422728/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-422728",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-422728",
	                "name.minikube.sigs.k8s.io": "embed-certs-422728",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "defd8151d544eeff498dfa0fcd81991cec3dd2cedf20abd49c7a8eb1e032ae73",
	            "SandboxKey": "/var/run/docker/netns/defd8151d544",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-422728": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:c2:bd:8e:27:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "78ce79b8fdce892f49cf723023717b9a2880c30a5665eaa6c42c151329eb9e85",
	                    "EndpointID": "5d6e844052938fd39acf16efdacb683a8e471d646459cc00f2a1c5b2af436e3d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-422728",
	                        "45825e68cb86"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-422728 -n embed-certs-422728
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-422728 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-422728 logs -n 25: (1.3869678s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-env-413219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-413219     │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ ssh     │ force-systemd-flag-519664 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-519664    │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ delete  │ -p force-systemd-flag-519664                                                                                                                                                                                                                  │ force-systemd-flag-519664    │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:33 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p force-systemd-env-413219                                                                                                                                                                                                                   │ force-systemd-env-413219     │ jenkins │ v1.37.0 │ 09 Nov 25 14:33 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p cert-options-276181 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ cert-options-276181 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ ssh     │ -p cert-options-276181 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ delete  │ -p cert-options-276181                                                                                                                                                                                                                        │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	│ stop    │ -p old-k8s-version-349599 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ image   │ old-k8s-version-349599 image list --format=json                                                                                                                                                                                               │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ pause   │ -p old-k8s-version-349599 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ delete  │ -p cert-expiration-179822                                                                                                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:37:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
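
The "Log line format" header above describes the glog-style prefix used by every entry that follows. As a minimal sketch only (not part of minikube or of this report; the regular expression and the field labels are my own, assuming exactly the format string above), such a line can be split into its fields with the Go standard library:

	// parse_klog_line.go - illustrative only; assumes the
	// "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" format noted above.
	package main

	import (
		"fmt"
		"regexp"
	)

	var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

	func main() {
		line := "I1109 14:37:27.894277  190681 out.go:360] Setting OutFile to fd 1 ..."
		if m := logLine.FindStringSubmatch(line); m != nil {
			// severity letter, mmdd date, time, thread id, file:line, message
			fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
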
	I1109 14:37:27.894277  190681 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:37:27.894406  190681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:27.894416  190681 out.go:374] Setting ErrFile to fd 2...
	I1109 14:37:27.894422  190681 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:37:27.894789  190681 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:37:27.895306  190681 out.go:368] Setting JSON to false
	I1109 14:37:27.896214  190681 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4798,"bootTime":1762694250,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:37:27.896283  190681 start.go:143] virtualization:  
	I1109 14:37:27.900282  190681 out.go:179] * [embed-certs-422728] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:37:27.904596  190681 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:37:27.904826  190681 notify.go:221] Checking for updates...
	I1109 14:37:27.910876  190681 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:37:27.913958  190681 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:37:27.916949  190681 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:37:27.920020  190681 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:37:27.923021  190681 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:37:27.926652  190681 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:27.926783  190681 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:37:27.956898  190681 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:37:27.957090  190681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:28.011782  190681 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:37:28.000997805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:28.011962  190681 docker.go:319] overlay module found
	I1109 14:37:28.016020  190681 out.go:179] * Using the docker driver based on user configuration
	I1109 14:37:28.019201  190681 start.go:309] selected driver: docker
	I1109 14:37:28.019230  190681 start.go:930] validating driver "docker" against <nil>
	I1109 14:37:28.019247  190681 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:37:28.020086  190681 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:37:28.081454  190681 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:37:28.07150824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:37:28.081608  190681 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:37:28.081853  190681 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:37:28.085016  190681 out.go:179] * Using Docker driver with root privileges
	I1109 14:37:28.087949  190681 cni.go:84] Creating CNI manager for ""
	I1109 14:37:28.088017  190681 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:37:28.088032  190681 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:37:28.088119  190681 start.go:353] cluster config:
	{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:28.091441  190681 out.go:179] * Starting "embed-certs-422728" primary control-plane node in "embed-certs-422728" cluster
	I1109 14:37:28.094451  190681 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:37:28.097483  190681 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:37:28.100545  190681 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:28.100596  190681 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:37:28.100623  190681 cache.go:65] Caching tarball of preloaded images
	I1109 14:37:28.100629  190681 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:37:28.100706  190681 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:37:28.100717  190681 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:37:28.100823  190681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:37:28.100845  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json: {Name:mk4bdaec63ea3c9d33dd739aedde655ecb97f8c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:28.120386  190681 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:37:28.120409  190681 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:37:28.120423  190681 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:37:28.120445  190681 start.go:360] acquireMachinesLock for embed-certs-422728: {Name:mkaf26c3066ebca49339c9527aed846108c5e799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:37:28.120559  190681 start.go:364] duration metric: took 88.411µs to acquireMachinesLock for "embed-certs-422728"
	I1109 14:37:28.120590  190681 start.go:93] Provisioning new machine with config: &{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:37:28.120662  190681 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:37:24.289388  189143 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-103048:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.628908959s)
	I1109 14:37:24.289413  189143 kic.go:203] duration metric: took 4.629038798s to extract preloaded images to volume ...
	W1109 14:37:24.289552  189143 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 14:37:24.289667  189143 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:37:24.388861  189143 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-103048 --name default-k8s-diff-port-103048 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-103048 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-103048 --network default-k8s-diff-port-103048 --ip 192.168.85.2 --volume default-k8s-diff-port-103048:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:37:24.745528  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Running}}
	I1109 14:37:24.782473  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:37:24.816268  189143 cli_runner.go:164] Run: docker exec default-k8s-diff-port-103048 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:37:24.933856  189143 oci.go:144] the created container "default-k8s-diff-port-103048" has a running status.
	I1109 14:37:24.933881  189143 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa...
	I1109 14:37:25.374288  189143 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:37:25.424219  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:37:25.460449  189143 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:37:25.460483  189143 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-103048 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:37:25.526335  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:37:25.563150  189143 machine.go:94] provisionDockerMachine start ...
	I1109 14:37:25.563257  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:25.589918  189143 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:25.590471  189143 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:37:25.590505  189143 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:37:25.591566  189143 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:37:28.124290  190681 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:37:28.124568  190681 start.go:159] libmachine.API.Create for "embed-certs-422728" (driver="docker")
	I1109 14:37:28.124616  190681 client.go:173] LocalClient.Create starting
	I1109 14:37:28.124694  190681 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 14:37:28.124738  190681 main.go:143] libmachine: Decoding PEM data...
	I1109 14:37:28.124757  190681 main.go:143] libmachine: Parsing certificate...
	I1109 14:37:28.124819  190681 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 14:37:28.124845  190681 main.go:143] libmachine: Decoding PEM data...
	I1109 14:37:28.124858  190681 main.go:143] libmachine: Parsing certificate...
	I1109 14:37:28.125225  190681 cli_runner.go:164] Run: docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:37:28.141263  190681 cli_runner.go:211] docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:37:28.141342  190681 network_create.go:284] running [docker network inspect embed-certs-422728] to gather additional debugging logs...
	I1109 14:37:28.141364  190681 cli_runner.go:164] Run: docker network inspect embed-certs-422728
	W1109 14:37:28.157685  190681 cli_runner.go:211] docker network inspect embed-certs-422728 returned with exit code 1
	I1109 14:37:28.157717  190681 network_create.go:287] error running [docker network inspect embed-certs-422728]: docker network inspect embed-certs-422728: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-422728 not found
	I1109 14:37:28.157737  190681 network_create.go:289] output of [docker network inspect embed-certs-422728]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-422728 not found
	
	** /stderr **
	I1109 14:37:28.157857  190681 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:37:28.174886  190681 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b901b8dcb821 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:01:f6:7f:4e:91} reservation:<nil>}
	I1109 14:37:28.175199  190681 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-46dda1eda2df IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:a9:4d:4f:8f:31} reservation:<nil>}
	I1109 14:37:28.175517  190681 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3b44df0b0b1c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:80:ac:56:fe:3d} reservation:<nil>}
	I1109 14:37:28.175994  190681 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019732d0}
	I1109 14:37:28.176022  190681 network_create.go:124] attempt to create docker network embed-certs-422728 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 14:37:28.176079  190681 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-422728 embed-certs-422728
	I1109 14:37:28.243516  190681 network_create.go:108] docker network embed-certs-422728 192.168.76.0/24 created
	I1109 14:37:28.243550  190681 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-422728" container
	I1109 14:37:28.243620  190681 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:37:28.259956  190681 cli_runner.go:164] Run: docker volume create embed-certs-422728 --label name.minikube.sigs.k8s.io=embed-certs-422728 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:37:28.277965  190681 oci.go:103] Successfully created a docker volume embed-certs-422728
	I1109 14:37:28.278056  190681 cli_runner.go:164] Run: docker run --rm --name embed-certs-422728-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-422728 --entrypoint /usr/bin/test -v embed-certs-422728:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:37:28.861326  190681 oci.go:107] Successfully prepared a docker volume embed-certs-422728
	I1109 14:37:28.861389  190681 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:28.861399  190681 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:37:28.861461  190681 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-422728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:37:28.763939  189143 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:37:28.763963  189143 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103048"
	I1109 14:37:28.764033  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:28.786740  189143 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:28.787052  189143 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:37:28.787069  189143 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103048 && echo "default-k8s-diff-port-103048" | sudo tee /etc/hostname
	I1109 14:37:28.968030  189143 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:37:28.968100  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:28.991597  189143 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:28.992013  189143 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:37:28.992035  189143 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:37:29.180074  189143 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:37:29.180104  189143 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:37:29.180138  189143 ubuntu.go:190] setting up certificates
	I1109 14:37:29.180147  189143 provision.go:84] configureAuth start
	I1109 14:37:29.180214  189143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:37:29.208683  189143 provision.go:143] copyHostCerts
	I1109 14:37:29.208753  189143 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:37:29.208767  189143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:37:29.208854  189143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:37:29.208968  189143 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:37:29.208979  189143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:37:29.209011  189143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:37:29.209081  189143 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:37:29.209091  189143 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:37:29.209115  189143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:37:29.209173  189143 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103048 localhost minikube]
	I1109 14:37:29.947152  189143 provision.go:177] copyRemoteCerts
	I1109 14:37:29.947241  189143 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:37:29.947290  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:29.970484  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:30.094209  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:37:30.118124  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 14:37:30.140811  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:37:30.162596  189143 provision.go:87] duration metric: took 982.428887ms to configureAuth
	I1109 14:37:30.162669  189143 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:37:30.162932  189143 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:30.163087  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.188399  189143 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:30.188745  189143 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33055 <nil> <nil>}
	I1109 14:37:30.188761  189143 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:37:30.512011  189143 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:37:30.512097  189143 machine.go:97] duration metric: took 4.948924223s to provisionDockerMachine
	I1109 14:37:30.512123  189143 client.go:176] duration metric: took 11.84746418s to LocalClient.Create
	I1109 14:37:30.512169  189143 start.go:167] duration metric: took 11.84754917s to libmachine.API.Create "default-k8s-diff-port-103048"
	I1109 14:37:30.512196  189143 start.go:293] postStartSetup for "default-k8s-diff-port-103048" (driver="docker")
	I1109 14:37:30.512221  189143 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:37:30.512330  189143 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:37:30.512408  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.537079  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:30.648499  189143 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:37:30.652391  189143 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:37:30.652422  189143 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:37:30.652433  189143 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:37:30.652520  189143 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:37:30.652617  189143 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:37:30.652731  189143 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:37:30.660986  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:37:30.680750  189143 start.go:296] duration metric: took 168.525074ms for postStartSetup
	I1109 14:37:30.681184  189143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:37:30.712083  189143 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/config.json ...
	I1109 14:37:30.712381  189143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:37:30.712425  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.733868  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:30.841430  189143 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:37:30.849227  189143 start.go:128] duration metric: took 12.188224409s to createHost
	I1109 14:37:30.849253  189143 start.go:83] releasing machines lock for "default-k8s-diff-port-103048", held for 12.188344861s
	I1109 14:37:30.849331  189143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:37:30.867928  189143 ssh_runner.go:195] Run: cat /version.json
	I1109 14:37:30.867941  189143 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:37:30.867981  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.868015  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:37:30.897381  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:30.907682  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:37:31.028555  189143 ssh_runner.go:195] Run: systemctl --version
	I1109 14:37:31.036871  189143 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:37:31.164908  189143 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:37:31.172089  189143 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:37:31.172165  189143 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:37:31.238997  189143 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 14:37:31.239022  189143 start.go:496] detecting cgroup driver to use...
	I1109 14:37:31.239057  189143 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:37:31.239108  189143 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:37:31.271492  189143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:37:31.289679  189143 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:37:31.289742  189143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:37:31.312162  189143 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:37:31.337696  189143 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:37:31.517684  189143 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:37:31.689258  189143 docker.go:234] disabling docker service ...
	I1109 14:37:31.689334  189143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:37:31.714460  189143 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:37:31.728794  189143 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:37:31.881564  189143 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:37:32.013745  189143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:37:32.031812  189143 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:37:32.049051  189143 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:37:32.049128  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.061319  189143 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:37:32.061422  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.073624  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.083429  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.096227  189143 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:37:32.105855  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.115035  189143 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.131863  189143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:32.144259  189143 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:37:32.152292  189143 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:37:32.159982  189143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:37:32.293121  189143 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:37:33.731815  189143 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.438617031s)
	I1109 14:37:33.731844  189143 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:37:33.731915  189143 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:37:33.736048  189143 start.go:564] Will wait 60s for crictl version
	I1109 14:37:33.736119  189143 ssh_runner.go:195] Run: which crictl
	I1109 14:37:33.739955  189143 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:37:33.780063  189143 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:37:33.780140  189143 ssh_runner.go:195] Run: crio --version
	I1109 14:37:33.826915  189143 ssh_runner.go:195] Run: crio --version
	I1109 14:37:33.885137  189143 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:37:33.888072  189143 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:37:33.909741  189143 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:37:33.915383  189143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:37:33.927397  189143 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:37:33.927510  189143 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:33.927559  189143 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:37:33.975664  189143 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:37:33.975685  189143 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:37:33.975745  189143 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:37:34.005445  189143 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:37:34.005469  189143 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:37:34.005477  189143 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:37:34.005577  189143 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:37:34.005666  189143 ssh_runner.go:195] Run: crio config
	I1109 14:37:34.083321  189143 cni.go:84] Creating CNI manager for ""
	I1109 14:37:34.083403  189143 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:37:34.083434  189143 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:37:34.083486  189143 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103048 NodeName:default-k8s-diff-port-103048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:37:34.083658  189143 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
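
The kubeadm config printed above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, separated by "---") which, per the scp entry a few lines further down, is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. As a rough, standalone illustration (my own sketch, not minikube code; the kinds helper and the inline sample string are assumptions), the document kinds can be listed using only the Go standard library:

	// list_kubeadm_kinds.go - illustrative only.
	package main

	import (
		"fmt"
		"strings"
	)

	// kinds splits a multi-document YAML stream on "---" separators and
	// collects the value of each "kind:" line it finds.
	func kinds(config string) []string {
		var out []string
		for _, doc := range strings.Split(config, "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				t := strings.TrimSpace(line)
				if strings.HasPrefix(t, "kind:") {
					out = append(out, strings.TrimSpace(strings.TrimPrefix(t, "kind:")))
				}
			}
		}
		return out
	}

	func main() {
		cfg := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration\n"
		fmt.Println(kinds(cfg)) // [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	}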
	
	I1109 14:37:34.083761  189143 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:37:34.095026  189143 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:37:34.095141  189143 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:37:34.111248  189143 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:37:34.143321  189143 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:37:34.175263  189143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1109 14:37:34.204391  189143 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:37:34.215888  189143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:37:34.234263  189143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:37:34.543457  189143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:37:34.564062  189143 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048 for IP: 192.168.85.2
	I1109 14:37:34.564082  189143 certs.go:195] generating shared ca certs ...
	I1109 14:37:34.564098  189143 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:34.564231  189143 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:37:34.564271  189143 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:37:34.564278  189143 certs.go:257] generating profile certs ...
	I1109 14:37:34.564331  189143 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key
	I1109 14:37:34.564342  189143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt with IP's: []
	I1109 14:37:35.204051  189143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt ...
	I1109 14:37:35.204088  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: {Name:mk319386f170bb2d77712b8498f6bcc46b18ff9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:35.204300  189143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key ...
	I1109 14:37:35.204311  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key: {Name:mk1b66c07b4c486eedd4da8511cb496afffa6e6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:35.204415  189143 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c
	I1109 14:37:35.204430  189143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt.87358e1c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1109 14:37:35.765746  189143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt.87358e1c ...
	I1109 14:37:35.765777  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt.87358e1c: {Name:mk78654738496ae4878f38f21723da2e12a738c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:35.765949  189143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c ...
	I1109 14:37:35.765966  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c: {Name:mk5b82f1276b0676529203567965fd1e87ea9e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:35.766061  189143 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt.87358e1c -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt
	I1109 14:37:35.766143  189143 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key
	I1109 14:37:35.766205  189143 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key
	I1109 14:37:35.766225  189143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt with IP's: []
	I1109 14:37:36.305399  189143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt ...
	I1109 14:37:36.305475  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt: {Name:mkbde1426663a0abe079281b4819c0d065ec6219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:36.305683  189143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key ...
	I1109 14:37:36.305722  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key: {Name:mka94bae95eeae6fddc50a30b08aed4d38918606 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:36.305955  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:37:36.306026  189143 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:37:36.306053  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:37:36.306100  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:37:36.306154  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:37:36.306199  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:37:36.306273  189143 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:37:36.306876  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:37:36.328938  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:37:36.347713  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:37:36.369561  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:37:36.390274  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:37:36.417418  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:37:36.438988  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:37:36.469111  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:37:36.491206  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:37:36.509992  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:37:36.529176  189143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:37:36.547490  189143 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:37:36.567777  189143 ssh_runner.go:195] Run: openssl version
	I1109 14:37:36.574386  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:37:36.583356  189143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:37:36.587195  189143 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:37:36.587250  189143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:37:36.628084  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:37:36.637044  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:37:36.644668  189143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:37:36.648624  189143 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:37:36.648691  189143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:37:36.689895  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:37:36.698046  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:37:36.705924  189143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:36.710146  189143 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:36.710256  189143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:36.751992  189143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
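For reference, the three "ln -fs ... /etc/ssl/certs/<hash>.0" commands above follow the standard OpenSSL c_rehash convention: the link name is the certificate's subject hash plus a ".0" suffix, which is how OpenSSL-based clients look up trust anchors in /etc/ssl/certs. A minimal sketch of the same idea for the minikubeCA certificate (the shell variable is illustrative, not taken from the log):

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"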
	I1109 14:37:36.760006  189143 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:37:36.763958  189143 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:37:36.764052  189143 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:36.764159  189143 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:37:36.764247  189143 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:37:36.791006  189143 cri.go:89] found id: ""
	I1109 14:37:36.791122  189143 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:37:36.800580  189143 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:37:36.808608  189143 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:37:36.808705  189143 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:37:36.818251  189143 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:37:36.818315  189143 kubeadm.go:158] found existing configuration files:
	
	I1109 14:37:36.818386  189143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1109 14:37:36.827006  189143 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:37:36.827110  189143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:37:36.835814  189143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1109 14:37:36.843973  189143 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:37:36.844079  189143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:37:36.851559  189143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1109 14:37:36.859852  189143 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:37:36.859986  189143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:37:36.867289  189143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1109 14:37:36.875295  189143 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:37:36.875398  189143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:37:36.882707  189143 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:37:36.927926  189143 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:37:36.928306  189143 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:37:36.980410  189143 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:37:36.980494  189143 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 14:37:36.980533  189143 kubeadm.go:319] OS: Linux
	I1109 14:37:36.980580  189143 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:37:36.980631  189143 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 14:37:36.980681  189143 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:37:36.980731  189143 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:37:36.980783  189143 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:37:36.980836  189143 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:37:36.980883  189143 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:37:36.980941  189143 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:37:36.980989  189143 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 14:37:37.110194  189143 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:37:37.110309  189143 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:37:37.110405  189143 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:37:37.124673  189143 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:37:33.621312  190681 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-422728:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.75981683s)
	I1109 14:37:33.621344  190681 kic.go:203] duration metric: took 4.75994154s to extract preloaded images to volume ...
	W1109 14:37:33.621483  190681 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 14:37:33.621599  190681 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:37:33.717076  190681 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-422728 --name embed-certs-422728 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-422728 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-422728 --network embed-certs-422728 --ip 192.168.76.2 --volume embed-certs-422728:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:37:34.108930  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Running}}
	I1109 14:37:34.144754  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:37:34.179083  190681 cli_runner.go:164] Run: docker exec embed-certs-422728 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:37:34.246021  190681 oci.go:144] the created container "embed-certs-422728" has a running status.
	I1109 14:37:34.246050  190681 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa...
	I1109 14:37:35.793995  190681 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:37:35.827258  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:37:35.855173  190681 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:37:35.855199  190681 kic_runner.go:114] Args: [docker exec --privileged embed-certs-422728 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:37:35.933016  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:37:35.961922  190681 machine.go:94] provisionDockerMachine start ...
	I1109 14:37:35.962016  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:35.985738  190681 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:35.986065  190681 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:37:35.986074  190681 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:37:36.163139  190681 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:37:36.163159  190681 ubuntu.go:182] provisioning hostname "embed-certs-422728"
	I1109 14:37:36.163230  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:36.184262  190681 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:36.184594  190681 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:37:36.184611  190681 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-422728 && echo "embed-certs-422728" | sudo tee /etc/hostname
	I1109 14:37:36.369570  190681 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:37:36.369635  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:36.395153  190681 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:36.395448  190681 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:37:36.395465  190681 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422728/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:37:36.556231  190681 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:37:36.556255  190681 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:37:36.556275  190681 ubuntu.go:190] setting up certificates
	I1109 14:37:36.556286  190681 provision.go:84] configureAuth start
	I1109 14:37:36.556354  190681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:37:36.577644  190681 provision.go:143] copyHostCerts
	I1109 14:37:36.577710  190681 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:37:36.577724  190681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:37:36.577799  190681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:37:36.577897  190681 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:37:36.577907  190681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:37:36.577936  190681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:37:36.577994  190681 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:37:36.578003  190681 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:37:36.578027  190681 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:37:36.578077  190681 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422728 san=[127.0.0.1 192.168.76.2 embed-certs-422728 localhost minikube]
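provision.go does this signing in Go; as a rough shell equivalent, a server certificate with the same SAN set could be produced with OpenSSL. This is only an illustrative sketch under assumed file names and validity period, not what minikube actually runs:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.embed-certs-422728/CN=minikube"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-422728,DNS:localhost,DNS:minikube')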
	I1109 14:37:37.050151  190681 provision.go:177] copyRemoteCerts
	I1109 14:37:37.050264  190681 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:37:37.050321  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.069435  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:37.178314  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:37:37.202711  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:37:37.219814  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:37:37.237810  190681 provision.go:87] duration metric: took 681.504793ms to configureAuth
	I1109 14:37:37.237832  190681 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:37:37.238011  190681 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:37:37.238111  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.277717  190681 main.go:143] libmachine: Using SSH client type: native
	I1109 14:37:37.278060  190681 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33060 <nil> <nil>}
	I1109 14:37:37.278082  190681 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:37:37.570802  190681 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:37:37.570825  190681 machine.go:97] duration metric: took 1.608885623s to provisionDockerMachine
	I1109 14:37:37.570835  190681 client.go:176] duration metric: took 9.446209559s to LocalClient.Create
	I1109 14:37:37.570848  190681 start.go:167] duration metric: took 9.446286055s to libmachine.API.Create "embed-certs-422728"
	I1109 14:37:37.570856  190681 start.go:293] postStartSetup for "embed-certs-422728" (driver="docker")
	I1109 14:37:37.570865  190681 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:37:37.570935  190681 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:37:37.570998  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.593092  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:37.700665  190681 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:37:37.704483  190681 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:37:37.704510  190681 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:37:37.704522  190681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:37:37.704587  190681 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:37:37.704672  190681 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:37:37.704771  190681 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:37:37.712412  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:37:37.730752  190681 start.go:296] duration metric: took 159.882791ms for postStartSetup
	I1109 14:37:37.731107  190681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:37:37.747546  190681 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:37:37.747819  190681 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:37:37.747915  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.764253  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:37.877289  190681 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:37:37.882629  190681 start.go:128] duration metric: took 9.761952743s to createHost
	I1109 14:37:37.882654  190681 start.go:83] releasing machines lock for "embed-certs-422728", held for 9.762080974s
	I1109 14:37:37.882721  190681 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:37:37.129759  189143 out.go:252]   - Generating certificates and keys ...
	I1109 14:37:37.129875  189143 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:37:37.129951  189143 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:37:37.939982  189143 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:37:37.904221  190681 ssh_runner.go:195] Run: cat /version.json
	I1109 14:37:37.904280  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.904613  190681 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:37:37.904682  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:37:37.942266  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:37.948914  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:37:38.153694  190681 ssh_runner.go:195] Run: systemctl --version
	I1109 14:37:38.160793  190681 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:37:38.206624  190681 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:37:38.211010  190681 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:37:38.211134  190681 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:37:38.241762  190681 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 14:37:38.241787  190681 start.go:496] detecting cgroup driver to use...
	I1109 14:37:38.241827  190681 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:37:38.241888  190681 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:37:38.261521  190681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:37:38.276562  190681 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:37:38.276631  190681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:37:38.295061  190681 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:37:38.314807  190681 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:37:38.465078  190681 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:37:38.661701  190681 docker.go:234] disabling docker service ...
	I1109 14:37:38.661781  190681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:37:38.685698  190681 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:37:38.700596  190681 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:37:38.852535  190681 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:37:39.004478  190681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:37:39.021207  190681 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:37:39.038137  190681 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:37:39.038206  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.047706  190681 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:37:39.047775  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.057668  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.066902  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.076064  190681 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:37:39.084815  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.094208  190681 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.108916  190681 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:37:39.118205  190681 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:37:39.126682  190681 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:37:39.135280  190681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:37:39.276261  190681 ssh_runner.go:195] Run: sudo systemctl restart crio
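Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings before crio is restarted (a sketch of the intended end state; the real drop-in contains additional keys and TOML section headers):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]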
	I1109 14:37:39.428332  190681 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:37:39.428416  190681 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:37:39.433110  190681 start.go:564] Will wait 60s for crictl version
	I1109 14:37:39.433190  190681 ssh_runner.go:195] Run: which crictl
	I1109 14:37:39.436804  190681 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:37:39.462726  190681 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:37:39.462870  190681 ssh_runner.go:195] Run: crio --version
	I1109 14:37:39.497055  190681 ssh_runner.go:195] Run: crio --version
	I1109 14:37:39.536554  190681 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:37:39.539477  190681 cli_runner.go:164] Run: docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:37:39.560445  190681 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:37:39.568342  190681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:37:39.577861  190681 kubeadm.go:884] updating cluster {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:37:39.577977  190681 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:37:39.578030  190681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:37:39.630850  190681 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:37:39.630877  190681 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:37:39.630931  190681 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:37:39.658737  190681 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:37:39.658763  190681 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:37:39.658771  190681 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:37:39.658863  190681 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
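The unit override above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later (the 368-byte scp below); on the node, the merged unit can be checked with standard systemd tooling, for example:

    systemctl cat kubelet              # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart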
	I1109 14:37:39.658944  190681 ssh_runner.go:195] Run: crio config
	I1109 14:37:39.721631  190681 cni.go:84] Creating CNI manager for ""
	I1109 14:37:39.721665  190681 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:37:39.721684  190681 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:37:39.721715  190681 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422728 NodeName:embed-certs-422728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:37:39.721873  190681 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
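The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new and later fed to "kubeadm init --config". When debugging a run like this one, a copy of the file can be checked offline; recent kubeadm releases ship a validation subcommand (shown here as a suggestion, not something this log executes):

    /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml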
	
	I1109 14:37:39.722002  190681 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:37:39.730651  190681 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:37:39.730723  190681 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:37:39.738876  190681 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1109 14:37:39.757436  190681 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:37:39.771011  190681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1109 14:37:39.786092  190681 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:37:39.790206  190681 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
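The hosts update above deliberately rebuilds the file under /tmp and then copies it over /etc/hosts instead of editing with sed -i: inside the container /etc/hosts is a bind mount, so it must be overwritten in place rather than replaced by rename. The same pattern spelled out (the temp-file name is illustrative):

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.76.2\tcontrol-plane.minikube.internal'
    } > /tmp/hosts.new && sudo cp /tmp/hosts.new /etc/hosts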
	I1109 14:37:39.799900  190681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:37:39.940282  190681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:37:39.965097  190681 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728 for IP: 192.168.76.2
	I1109 14:37:39.965166  190681 certs.go:195] generating shared ca certs ...
	I1109 14:37:39.965199  190681 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:39.965367  190681 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:37:39.965442  190681 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:37:39.965479  190681 certs.go:257] generating profile certs ...
	I1109 14:37:39.965568  190681 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key
	I1109 14:37:39.965597  190681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.crt with IP's: []
	I1109 14:37:40.464957  190681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.crt ...
	I1109 14:37:40.464985  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.crt: {Name:mkb3052a1a3ee81a199bbfd07c17ebda70f0241b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:40.465156  190681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key ...
	I1109 14:37:40.465163  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key: {Name:mk4c95f6664bd8acbdb34959202e45d60df7d02e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:40.465239  190681 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a
	I1109 14:37:40.465257  190681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt.b1b6b07a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1109 14:37:40.845161  190681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt.b1b6b07a ...
	I1109 14:37:40.845194  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt.b1b6b07a: {Name:mk34dc98430e56ee4a4f57cd0ba366d96b6dea41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:40.845406  190681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a ...
	I1109 14:37:40.845424  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a: {Name:mkf100a1bb9d721d9144c34041e2b66fa2fa32ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:40.845516  190681 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt.b1b6b07a -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt
	I1109 14:37:40.845600  190681 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key
	I1109 14:37:40.845664  190681 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key
	I1109 14:37:40.845684  190681 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt with IP's: []
	I1109 14:37:41.999590  190681 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt ...
	I1109 14:37:41.999623  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt: {Name:mk26367aa5d706d5485496188212ef42dd866cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:41.999796  190681 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key ...
	I1109 14:37:41.999813  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key: {Name:mke3b694bfe374eb19b825b523bddbf55f17a2d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:37:42.000015  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:37:42.000058  190681 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:37:42.000072  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:37:42.000099  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:37:42.000128  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:37:42.000155  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:37:42.000202  190681 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:37:42.000805  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:37:42.024398  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:37:42.047819  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:37:42.069993  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:37:42.094622  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1109 14:37:42.119636  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:37:42.145596  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:37:42.171131  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:37:42.196337  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:37:42.221062  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:37:42.245411  190681 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:37:42.268392  190681 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:37:42.294622  190681 ssh_runner.go:195] Run: openssl version
	I1109 14:37:42.302336  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:37:42.312594  190681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:37:42.317219  190681 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:37:42.317338  190681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:37:42.362880  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:37:42.372541  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:37:42.381922  190681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:42.386584  190681 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:42.386701  190681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:37:42.430305  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:37:42.440223  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:37:42.449666  190681 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:37:42.454514  190681 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:37:42.454662  190681 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:37:42.498487  190681 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:37:42.507929  190681 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:37:42.512583  190681 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:37:42.512691  190681 kubeadm.go:401] StartCluster: {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:37:42.512807  190681 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:37:42.512892  190681 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:37:42.545234  190681 cri.go:89] found id: ""
	I1109 14:37:42.545357  190681 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:37:42.556463  190681 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:37:42.565543  190681 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:37:42.565668  190681 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:37:42.576781  190681 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:37:42.576837  190681 kubeadm.go:158] found existing configuration files:
	
	I1109 14:37:42.576925  190681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:37:42.585838  190681 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:37:42.585953  190681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:37:42.594060  190681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:37:42.603270  190681 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:37:42.603384  190681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:37:42.611451  190681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:37:42.620895  190681 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:37:42.621003  190681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:37:42.630264  190681 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:37:42.639623  190681 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:37:42.639740  190681 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:37:42.647939  190681 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:37:42.729524  190681 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:37:42.730008  190681 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:37:42.762021  190681 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:37:42.762173  190681 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 14:37:42.762250  190681 kubeadm.go:319] OS: Linux
	I1109 14:37:42.762333  190681 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:37:42.762417  190681 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 14:37:42.762498  190681 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:37:42.762579  190681 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:37:42.762659  190681 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:37:42.762745  190681 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:37:42.762820  190681 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:37:42.762899  190681 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:37:42.762976  190681 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 14:37:42.838155  190681 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:37:42.838330  190681 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:37:42.838478  190681 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:37:42.859899  190681 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:37:42.866768  190681 out.go:252]   - Generating certificates and keys ...
	I1109 14:37:42.866943  190681 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:37:42.867058  190681 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:37:38.390439  189143 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:37:39.132013  189143 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:37:39.667087  189143 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:37:40.260225  189143 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:37:40.260377  189143 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-103048 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:37:42.060547  189143 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:37:42.060888  189143 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-103048 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:37:42.911663  189143 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:37:43.120616  189143 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:37:43.514096  189143 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:37:43.514397  189143 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:37:43.996218  189143 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:37:44.980688  189143 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:37:45.437449  189143 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:37:45.707091  189143 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:37:46.319949  189143 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:37:46.320711  189143 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:37:46.332006  189143 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:37:43.447449  190681 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:37:44.003488  190681 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:37:44.076213  190681 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:37:44.147781  190681 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:37:45.005626  190681 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:37:45.006224  190681 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-422728 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:37:45.236610  190681 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:37:45.237325  190681 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-422728 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:37:45.668726  190681 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:37:45.896215  190681 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:37:46.818389  190681 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:37:46.818607  190681 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:37:47.791785  190681 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:37:46.335519  189143 out.go:252]   - Booting up control plane ...
	I1109 14:37:46.335632  189143 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:37:46.335715  189143 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:37:46.336483  189143 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:37:46.372440  189143 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:37:46.372559  189143 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:37:46.381181  189143 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:37:46.381277  189143 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:37:46.381317  189143 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:37:46.558538  189143 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:37:46.558665  189143 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:37:48.060058  189143 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501614856s
	I1109 14:37:48.067853  189143 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:37:48.067965  189143 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1109 14:37:48.068060  189143 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:37:48.068142  189143 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:37:48.091168  190681 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:37:48.744223  190681 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:37:49.540188  190681 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:37:49.873301  190681 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:37:49.876135  190681 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:37:49.887520  190681 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:37:49.891081  190681 out.go:252]   - Booting up control plane ...
	I1109 14:37:49.891198  190681 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:37:49.891281  190681 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:37:49.900200  190681 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:37:49.937072  190681 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:37:49.937199  190681 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:37:49.954669  190681 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:37:49.954949  190681 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:37:49.955154  190681 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:37:50.174661  190681 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:37:50.174784  190681 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:37:51.192271  190681 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.016598052s
	I1109 14:37:51.194921  190681 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:37:51.195333  190681 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1109 14:37:51.195636  190681 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:37:51.196474  190681 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:37:56.512419  190681 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.315667569s
	I1109 14:37:54.423580  189143 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 6.354386642s
	I1109 14:37:58.540767  189143 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.472872963s
	I1109 14:37:59.070621  189143 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.001686771s
	I1109 14:37:59.091330  189143 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:37:59.109333  189143 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:37:59.131208  189143 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:37:59.131428  189143 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-103048 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:37:59.147642  189143 kubeadm.go:319] [bootstrap-token] Using token: adxlwq.cgijiq3nisu2ttzm
	I1109 14:37:59.152690  189143 out.go:252]   - Configuring RBAC rules ...
	I1109 14:37:59.152820  189143 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:37:59.160917  189143 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:37:59.169971  189143 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:37:59.175023  189143 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:37:59.179686  189143 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:37:59.184477  189143 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:37:59.478474  189143 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:37:59.942239  189143 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:38:00.527542  189143 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:38:00.527567  189143 kubeadm.go:319] 
	I1109 14:38:00.527632  189143 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:38:00.527646  189143 kubeadm.go:319] 
	I1109 14:38:00.527738  189143 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:38:00.527758  189143 kubeadm.go:319] 
	I1109 14:38:00.527786  189143 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:38:00.527851  189143 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:38:00.527963  189143 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:38:00.527975  189143 kubeadm.go:319] 
	I1109 14:38:00.528032  189143 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:38:00.528040  189143 kubeadm.go:319] 
	I1109 14:38:00.528090  189143 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:38:00.528098  189143 kubeadm.go:319] 
	I1109 14:38:00.528153  189143 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:38:00.528235  189143 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:38:00.528310  189143 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:38:00.528318  189143 kubeadm.go:319] 
	I1109 14:38:00.528415  189143 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:38:00.528500  189143 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:38:00.528508  189143 kubeadm.go:319] 
	I1109 14:38:00.528595  189143 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token adxlwq.cgijiq3nisu2ttzm \
	I1109 14:38:00.528709  189143 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 14:38:00.528737  189143 kubeadm.go:319] 	--control-plane 
	I1109 14:38:00.528746  189143 kubeadm.go:319] 
	I1109 14:38:00.528835  189143 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:38:00.528844  189143 kubeadm.go:319] 
	I1109 14:38:00.528930  189143 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token adxlwq.cgijiq3nisu2ttzm \
	I1109 14:38:00.529049  189143 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 14:38:00.542250  189143 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:38:00.542506  189143 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 14:38:00.542624  189143 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:38:00.542723  189143 cni.go:84] Creating CNI manager for ""
	I1109 14:38:00.542746  189143 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:38:00.546129  189143 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:37:59.490132  190681 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.293260821s
	I1109 14:38:00.698666  190681 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.502633826s
	I1109 14:38:00.728034  190681 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:38:00.748774  190681 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:38:00.770877  190681 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:38:00.772910  190681 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-422728 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:38:00.788529  190681 kubeadm.go:319] [bootstrap-token] Using token: w7hsc2.zw6d7sksu6ywppck
	I1109 14:38:00.791639  190681 out.go:252]   - Configuring RBAC rules ...
	I1109 14:38:00.791769  190681 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:38:00.797848  190681 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:38:00.807117  190681 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:38:00.815658  190681 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:38:00.821107  190681 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:38:00.828767  190681 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:38:01.106548  190681 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:38:01.588431  190681 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:38:02.106954  190681 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:38:02.108563  190681 kubeadm.go:319] 
	I1109 14:38:02.108729  190681 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:38:02.108750  190681 kubeadm.go:319] 
	I1109 14:38:02.108828  190681 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:38:02.108833  190681 kubeadm.go:319] 
	I1109 14:38:02.108859  190681 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:38:02.108918  190681 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:38:02.108969  190681 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:38:02.108974  190681 kubeadm.go:319] 
	I1109 14:38:02.109028  190681 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:38:02.109032  190681 kubeadm.go:319] 
	I1109 14:38:02.109080  190681 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:38:02.109085  190681 kubeadm.go:319] 
	I1109 14:38:02.109137  190681 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:38:02.109212  190681 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:38:02.109283  190681 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:38:02.109292  190681 kubeadm.go:319] 
	I1109 14:38:02.109376  190681 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:38:02.109452  190681 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:38:02.109456  190681 kubeadm.go:319] 
	I1109 14:38:02.109539  190681 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w7hsc2.zw6d7sksu6ywppck \
	I1109 14:38:02.109641  190681 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 14:38:02.109662  190681 kubeadm.go:319] 	--control-plane 
	I1109 14:38:02.109667  190681 kubeadm.go:319] 
	I1109 14:38:02.109751  190681 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:38:02.109756  190681 kubeadm.go:319] 
	I1109 14:38:02.109838  190681 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w7hsc2.zw6d7sksu6ywppck \
	I1109 14:38:02.109940  190681 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 14:38:02.115588  190681 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:38:02.115835  190681 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 14:38:02.116070  190681 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 14:38:02.116099  190681 cni.go:84] Creating CNI manager for ""
	I1109 14:38:02.116108  190681 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:38:02.119328  190681 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:38:02.122338  190681 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:38:02.126894  190681 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:38:02.126934  190681 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:38:02.141905  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:38:02.496545  190681 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:38:02.496676  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:02.496775  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-422728 minikube.k8s.io/updated_at=2025_11_09T14_38_02_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=embed-certs-422728 minikube.k8s.io/primary=true
	I1109 14:38:02.646117  190681 ops.go:34] apiserver oom_adj: -16
	I1109 14:38:02.646246  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:00.549060  189143 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:38:00.555763  189143 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:38:00.555783  189143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:38:00.586867  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:38:01.051565  189143 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:38:01.051646  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:01.051726  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-103048 minikube.k8s.io/updated_at=2025_11_09T14_38_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=default-k8s-diff-port-103048 minikube.k8s.io/primary=true
	I1109 14:38:01.328259  189143 ops.go:34] apiserver oom_adj: -16
	I1109 14:38:01.328378  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:01.828842  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:02.328513  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:02.829255  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:03.329028  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:03.829088  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:04.328572  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:04.829274  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:05.329398  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:05.829067  189143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:06.049792  189143 kubeadm.go:1114] duration metric: took 4.998205783s to wait for elevateKubeSystemPrivileges
	I1109 14:38:06.049826  189143 kubeadm.go:403] duration metric: took 29.285778776s to StartCluster
	I1109 14:38:06.049848  189143 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:38:06.049912  189143 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:38:06.050565  189143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:38:06.050762  189143 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:38:06.050922  189143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:38:06.051202  189143 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:38:06.051239  189143 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:38:06.051303  189143 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103048"
	I1109 14:38:06.051316  189143 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103048"
	I1109 14:38:06.051339  189143 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:38:06.051933  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:38:06.052365  189143 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103048"
	I1109 14:38:06.052397  189143 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103048"
	I1109 14:38:06.052688  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:38:06.059051  189143 out.go:179] * Verifying Kubernetes components...
	I1109 14:38:06.065944  189143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:38:06.092893  189143 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:38:03.147142  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:03.647289  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:04.146648  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:04.647045  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:05.146960  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:05.647028  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:06.152032  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:06.647057  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:07.146723  190681 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:38:07.399244  190681 kubeadm.go:1114] duration metric: took 4.902610827s to wait for elevateKubeSystemPrivileges
	I1109 14:38:07.399270  190681 kubeadm.go:403] duration metric: took 24.886583855s to StartCluster
	I1109 14:38:07.399286  190681 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:38:07.399340  190681 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:38:07.400744  190681 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:38:07.400965  190681 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:38:07.401068  190681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:38:07.401372  190681 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:38:07.401416  190681 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:38:07.401481  190681 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422728"
	I1109 14:38:07.401496  190681 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422728"
	I1109 14:38:07.401515  190681 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:38:07.402301  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:38:07.402746  190681 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422728"
	I1109 14:38:07.402764  190681 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422728"
	I1109 14:38:07.403030  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:38:07.406945  190681 out.go:179] * Verifying Kubernetes components...
	I1109 14:38:07.416317  190681 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:38:07.439165  190681 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:38:07.443535  190681 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:38:07.443560  190681 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:38:07.443629  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:38:07.454489  190681 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422728"
	I1109 14:38:07.454536  190681 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:38:07.454960  190681 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:38:07.487950  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:38:07.502084  190681 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:38:07.502104  190681 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:38:07.502164  190681 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:38:07.530312  190681 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33060 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:38:06.095895  189143 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:38:06.095918  189143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:38:06.095987  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:38:06.100625  189143 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103048"
	I1109 14:38:06.100676  189143 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:38:06.101099  189143 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:38:06.136914  189143 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:38:06.136939  189143 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:38:06.137009  189143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:38:06.152378  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:38:06.175502  189143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33055 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:38:06.717262  189143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:38:06.749468  189143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:38:06.749679  189143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:38:06.752463  189143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:38:07.580151  189143 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1109 14:38:07.582479  189143 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:38:07.973526  189143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.220998864s)
	I1109 14:38:07.976593  189143 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1109 14:38:07.979442  189143 addons.go:515] duration metric: took 1.928195233s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1109 14:38:08.085303  189143 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-103048" context rescaled to 1 replicas
	I1109 14:38:08.047785  190681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:38:08.074632  190681 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:38:08.074759  190681 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:38:08.097787  190681 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:38:08.719937  190681 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422728" to be "Ready" ...
	I1109 14:38:08.720168  190681 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1109 14:38:08.989996  190681 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1109 14:38:08.992915  190681 addons.go:515] duration metric: took 1.591484578s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1109 14:38:09.223822  190681 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-422728" context rescaled to 1 replicas
	W1109 14:38:10.722642  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:12.723522  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:09.585194  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:12.085539  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:15.223153  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:17.223262  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:14.585848  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:16.586032  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:19.223547  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:21.224085  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:18.586067  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:21.085381  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:23.086050  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:23.723395  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:25.723790  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:25.086470  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:27.585606  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:28.222840  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:30.223153  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:32.223588  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:29.585772  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:32.085381  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:34.723152  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:37.223685  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:34.586301  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:37.085897  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:39.722888  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:41.723575  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:39.585697  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:41.585978  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:44.223726  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:46.224037  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	W1109 14:38:43.586549  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	W1109 14:38:46.085605  189143 node_ready.go:57] node "default-k8s-diff-port-103048" has "Ready":"False" status (will retry)
	I1109 14:38:47.085517  189143 node_ready.go:49] node "default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:47.085546  189143 node_ready.go:38] duration metric: took 39.503046372s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:38:47.085562  189143 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:38:47.085631  189143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:38:47.097612  189143 api_server.go:72] duration metric: took 41.046822049s to wait for apiserver process to appear ...
	I1109 14:38:47.097635  189143 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:38:47.097654  189143 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:38:47.106692  189143 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:38:47.107695  189143 api_server.go:141] control plane version: v1.34.1
	I1109 14:38:47.107719  189143 api_server.go:131] duration metric: took 10.077431ms to wait for apiserver health ...
	I1109 14:38:47.107728  189143 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:38:47.110886  189143 system_pods.go:59] 8 kube-system pods found
	I1109 14:38:47.110923  189143 system_pods.go:61] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:47.110930  189143 system_pods.go:61] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.110937  189143 system_pods.go:61] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.110946  189143 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.110951  189143 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.110962  189143 system_pods.go:61] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.110966  189143 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.110983  189143 system_pods.go:61] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:47.110989  189143 system_pods.go:74] duration metric: took 3.256387ms to wait for pod list to return data ...
	I1109 14:38:47.111002  189143 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:38:47.113862  189143 default_sa.go:45] found service account: "default"
	I1109 14:38:47.113890  189143 default_sa.go:55] duration metric: took 2.881639ms for default service account to be created ...
	I1109 14:38:47.113900  189143 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:38:47.117019  189143 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:47.117056  189143 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:47.117062  189143 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.117069  189143 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.117075  189143 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.117105  189143 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.117117  189143 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.117124  189143 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.117131  189143 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:47.117176  189143 retry.go:31] will retry after 256.058817ms: missing components: kube-dns
	I1109 14:38:47.381163  189143 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:47.381239  189143 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:47.381247  189143 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.381254  189143 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.381258  189143 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.381262  189143 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.381267  189143 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.381271  189143 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.381276  189143 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:47.381291  189143 retry.go:31] will retry after 235.739071ms: missing components: kube-dns
	I1109 14:38:47.621143  189143 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:47.621182  189143 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:47.621189  189143 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.621195  189143 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.621199  189143 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.621204  189143 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.621208  189143 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.621218  189143 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.621224  189143 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:47.621245  189143 retry.go:31] will retry after 351.929389ms: missing components: kube-dns
	I1109 14:38:47.979300  189143 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:47.979333  189143 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running
	I1109 14:38:47.979341  189143 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running
	I1109 14:38:47.979347  189143 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running
	I1109 14:38:47.979351  189143 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running
	I1109 14:38:47.979355  189143 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running
	I1109 14:38:47.979382  189143 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:38:47.979392  189143 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:38:47.979395  189143 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:38:47.979404  189143 system_pods.go:126] duration metric: took 865.497806ms to wait for k8s-apps to be running ...
	I1109 14:38:47.979429  189143 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:38:47.979497  189143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:38:47.994182  189143 system_svc.go:56] duration metric: took 14.750051ms WaitForService to wait for kubelet
	I1109 14:38:47.994210  189143 kubeadm.go:587] duration metric: took 41.943424065s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:38:47.994247  189143 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:38:47.997674  189143 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:38:47.997704  189143 node_conditions.go:123] node cpu capacity is 2
	I1109 14:38:47.997717  189143 node_conditions.go:105] duration metric: took 3.460623ms to run NodePressure ...
	I1109 14:38:47.997731  189143 start.go:242] waiting for startup goroutines ...
	I1109 14:38:47.997761  189143 start.go:247] waiting for cluster config update ...
	I1109 14:38:47.997786  189143 start.go:256] writing updated cluster config ...
	I1109 14:38:47.998072  189143 ssh_runner.go:195] Run: rm -f paused
	I1109 14:38:48.002012  189143 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:38:48.006052  189143 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.017202  189143 pod_ready.go:94] pod "coredns-66bc5c9577-rbvc2" is "Ready"
	I1109 14:38:48.017233  189143 pod_ready.go:86] duration metric: took 11.152681ms for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.020187  189143 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.026142  189143 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:48.026213  189143 pod_ready.go:86] duration metric: took 5.997743ms for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.028747  189143 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.034343  189143 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:48.034375  189143 pod_ready.go:86] duration metric: took 5.596976ms for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.037412  189143 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.406383  189143 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:48.406425  189143 pod_ready.go:86] duration metric: took 368.986966ms for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:48.606603  189143 pod_ready.go:83] waiting for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:49.006702  189143 pod_ready.go:94] pod "kube-proxy-c57m2" is "Ready"
	I1109 14:38:49.006728  189143 pod_ready.go:86] duration metric: took 400.099585ms for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:49.207114  189143 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:49.606163  189143 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103048" is "Ready"
	I1109 14:38:49.606194  189143 pod_ready.go:86] duration metric: took 399.053714ms for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:49.606206  189143 pod_ready.go:40] duration metric: took 1.604164876s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:38:49.663295  189143 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:38:49.666538  189143 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103048" cluster and "default" namespace by default
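	
	The wait loop above (system_pods.go retrying until kube-dns leaves Pending, followed by the extra pod_ready pass over the control-plane pods) is the readiness gate minikube applies before printing the final "Done!" line. As an illustrative sketch only, not part of the test harness, the same check could be made by hand with kubectl against the context the log says was configured:
	
	  # wait for CoreDNS (label k8s-app=kube-dns, as shown in the log) to report Ready
	  kubectl --context default-k8s-diff-port-103048 -n kube-system \
	    wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	  # then confirm all kube-system pods are Running
	  kubectl --context default-k8s-diff-port-103048 -n kube-system get pods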
	W1109 14:38:48.722666  190681 node_ready.go:57] node "embed-certs-422728" has "Ready":"False" status (will retry)
	I1109 14:38:49.222796  190681 node_ready.go:49] node "embed-certs-422728" is "Ready"
	I1109 14:38:49.222825  190681 node_ready.go:38] duration metric: took 40.502853705s for node "embed-certs-422728" to be "Ready" ...
	I1109 14:38:49.222839  190681 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:38:49.222896  190681 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:38:49.242437  190681 api_server.go:72] duration metric: took 41.841444224s to wait for apiserver process to appear ...
	I1109 14:38:49.242463  190681 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:38:49.242481  190681 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:38:49.251813  190681 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:38:49.253127  190681 api_server.go:141] control plane version: v1.34.1
	I1109 14:38:49.253156  190681 api_server.go:131] duration metric: took 10.686659ms to wait for apiserver health ...
	I1109 14:38:49.253166  190681 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:38:49.256773  190681 system_pods.go:59] 8 kube-system pods found
	I1109 14:38:49.256847  190681 system_pods.go:61] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:49.256857  190681 system_pods.go:61] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:49.256878  190681 system_pods.go:61] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:49.256883  190681 system_pods.go:61] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:49.256888  190681 system_pods.go:61] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:49.256893  190681 system_pods.go:61] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:49.256921  190681 system_pods.go:61] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:49.256948  190681 system_pods.go:61] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:49.256955  190681 system_pods.go:74] duration metric: took 3.784022ms to wait for pod list to return data ...
	I1109 14:38:49.256966  190681 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:38:49.259821  190681 default_sa.go:45] found service account: "default"
	I1109 14:38:49.259844  190681 default_sa.go:55] duration metric: took 2.872417ms for default service account to be created ...
	I1109 14:38:49.259853  190681 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:38:49.266398  190681 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:49.266463  190681 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:49.266471  190681 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:49.266478  190681 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:49.266484  190681 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:49.266494  190681 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:49.266498  190681 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:49.266503  190681 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:49.266520  190681 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:49.266541  190681 retry.go:31] will retry after 228.694576ms: missing components: kube-dns
	I1109 14:38:49.500658  190681 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:49.500694  190681 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:49.500701  190681 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:49.500709  190681 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:49.500735  190681 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:49.500751  190681 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:49.500756  190681 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:49.500768  190681 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:49.500775  190681 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:49.500791  190681 retry.go:31] will retry after 289.168887ms: missing components: kube-dns
	I1109 14:38:49.793798  190681 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:49.793832  190681 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:38:49.793839  190681 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:49.793845  190681 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:49.793849  190681 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:49.793854  190681 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:49.793859  190681 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:49.793863  190681 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:49.793868  190681 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:38:49.793882  190681 retry.go:31] will retry after 329.103159ms: missing components: kube-dns
	I1109 14:38:50.127461  190681 system_pods.go:86] 8 kube-system pods found
	I1109 14:38:50.127497  190681 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running
	I1109 14:38:50.127504  190681 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running
	I1109 14:38:50.127508  190681 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running
	I1109 14:38:50.127512  190681 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:38:50.127517  190681 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running
	I1109 14:38:50.127521  190681 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running
	I1109 14:38:50.127531  190681 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running
	I1109 14:38:50.127535  190681 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running
	I1109 14:38:50.127544  190681 system_pods.go:126] duration metric: took 867.625882ms to wait for k8s-apps to be running ...
	I1109 14:38:50.127552  190681 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:38:50.127624  190681 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:38:50.141671  190681 system_svc.go:56] duration metric: took 14.110334ms WaitForService to wait for kubelet
	I1109 14:38:50.141697  190681 kubeadm.go:587] duration metric: took 42.740708795s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:38:50.141713  190681 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:38:50.144545  190681 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:38:50.144576  190681 node_conditions.go:123] node cpu capacity is 2
	I1109 14:38:50.144589  190681 node_conditions.go:105] duration metric: took 2.869742ms to run NodePressure ...
	I1109 14:38:50.144600  190681 start.go:242] waiting for startup goroutines ...
	I1109 14:38:50.144607  190681 start.go:247] waiting for cluster config update ...
	I1109 14:38:50.144618  190681 start.go:256] writing updated cluster config ...
	I1109 14:38:50.144895  190681 ssh_runner.go:195] Run: rm -f paused
	I1109 14:38:50.148730  190681 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:38:50.227333  190681 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.233164  190681 pod_ready.go:94] pod "coredns-66bc5c9577-4hk6l" is "Ready"
	I1109 14:38:50.233193  190681 pod_ready.go:86] duration metric: took 5.833065ms for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.236197  190681 pod_ready.go:83] waiting for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.242450  190681 pod_ready.go:94] pod "etcd-embed-certs-422728" is "Ready"
	I1109 14:38:50.242477  190681 pod_ready.go:86] duration metric: took 6.258586ms for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.245018  190681 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.250812  190681 pod_ready.go:94] pod "kube-apiserver-embed-certs-422728" is "Ready"
	I1109 14:38:50.250833  190681 pod_ready.go:86] duration metric: took 5.795ms for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.254399  190681 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.552915  190681 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422728" is "Ready"
	I1109 14:38:50.552943  190681 pod_ready.go:86] duration metric: took 298.520803ms for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:50.753326  190681 pod_ready.go:83] waiting for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:51.153649  190681 pod_ready.go:94] pod "kube-proxy-5zn8j" is "Ready"
	I1109 14:38:51.153732  190681 pod_ready.go:86] duration metric: took 400.381845ms for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:51.352991  190681 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:51.753508  190681 pod_ready.go:94] pod "kube-scheduler-embed-certs-422728" is "Ready"
	I1109 14:38:51.753536  190681 pod_ready.go:86] duration metric: took 400.518087ms for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:38:51.753548  190681 pod_ready.go:40] duration metric: took 1.604783227s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:38:51.808397  190681 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:38:51.814313  190681 out.go:179] * Done! kubectl is now configured to use "embed-certs-422728" cluster and "default" namespace by default
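	
	The second profile's log (process 190681) runs the same sequence for embed-certs-422728, including the apiserver health probe against https://192.168.76.2:8443/healthz that returned 200. A sketch of reproducing that probe manually with curl, assuming the default certificate layout minikube keeps under ~/.minikube (these paths are an assumption, not taken from this log):
	
	  # hypothetical manual healthz probe; adjust cert paths if the profile stores them elsewhere
	  curl --cacert ~/.minikube/ca.crt \
	       --cert   ~/.minikube/profiles/embed-certs-422728/client.crt \
	       --key    ~/.minikube/profiles/embed-certs-422728/client.key \
	       https://192.168.76.2:8443/healthz
	  # a healthy apiserver answers with: ok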
	
	
	==> CRI-O <==
	Nov 09 14:38:49 embed-certs-422728 crio[846]: time="2025-11-09T14:38:49.378254322Z" level=info msg="Created container cd4c51bebf45040b98405d22010de2bc21c02ecda159ad9249eb45233f24c2ef: kube-system/coredns-66bc5c9577-4hk6l/coredns" id=fbafcd86-2e7f-4236-8c09-5301bda94391 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:38:49 embed-certs-422728 crio[846]: time="2025-11-09T14:38:49.379215659Z" level=info msg="Starting container: cd4c51bebf45040b98405d22010de2bc21c02ecda159ad9249eb45233f24c2ef" id=289a4bd3-7544-4725-afc7-3c9fef4e2f7a name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:38:49 embed-certs-422728 crio[846]: time="2025-11-09T14:38:49.381332648Z" level=info msg="Started container" PID=1740 containerID=cd4c51bebf45040b98405d22010de2bc21c02ecda159ad9249eb45233f24c2ef description=kube-system/coredns-66bc5c9577-4hk6l/coredns id=289a4bd3-7544-4725-afc7-3c9fef4e2f7a name=/runtime.v1.RuntimeService/StartContainer sandboxID=99f439221141d444baa990b2d5a1803fddf4ff217514d45e542d142ec22f7a06
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.419206168Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c9922fa7-8f22-4fd2-8b3d-a7a92ff2be35 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.419312302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.424815691Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d303aed0a1c67e35e3a30463da6ec801a48d025d07d3f334e09b84b2e67a4d2c UID:219a9c8a-eefa-4542-a8f6-78c4f56bea13 NetNS:/var/run/netns/4df28467-9703-49de-8fd9-bbedc8a77059 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079b28}] Aliases:map[]}"
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.42485094Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.439563379Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d303aed0a1c67e35e3a30463da6ec801a48d025d07d3f334e09b84b2e67a4d2c UID:219a9c8a-eefa-4542-a8f6-78c4f56bea13 NetNS:/var/run/netns/4df28467-9703-49de-8fd9-bbedc8a77059 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079b28}] Aliases:map[]}"
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.439710671Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.443225768Z" level=info msg="Ran pod sandbox d303aed0a1c67e35e3a30463da6ec801a48d025d07d3f334e09b84b2e67a4d2c with infra container: default/busybox/POD" id=c9922fa7-8f22-4fd2-8b3d-a7a92ff2be35 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.445350208Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=74e57359-0f1a-4118-88da-2f6dce403352 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.445475739Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=74e57359-0f1a-4118-88da-2f6dce403352 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.445516691Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=74e57359-0f1a-4118-88da-2f6dce403352 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.44640181Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ef35f18a-316e-4da9-b38d-b2995528e2c8 name=/runtime.v1.ImageService/PullImage
	Nov 09 14:38:52 embed-certs-422728 crio[846]: time="2025-11-09T14:38:52.449431233Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.71632161Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=ef35f18a-316e-4da9-b38d-b2995528e2c8 name=/runtime.v1.ImageService/PullImage
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.71733674Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ff38ff86-366f-452d-bf79-3e4ccdb23ed8 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.720087852Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8706801c-81d0-4f24-be76-662d84e7f653 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.727163002Z" level=info msg="Creating container: default/busybox/busybox" id=5e379c30-9658-4043-8ae7-8954511a6a8b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.72729184Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.732236611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.732740032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.746872832Z" level=info msg="Created container 36e84808d460a6f7c218afc5b6e1f26d0810baaefa922036d6bbf44a9452c39a: default/busybox/busybox" id=5e379c30-9658-4043-8ae7-8954511a6a8b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.748223604Z" level=info msg="Starting container: 36e84808d460a6f7c218afc5b6e1f26d0810baaefa922036d6bbf44a9452c39a" id=fdf78986-abc5-42c5-b6a3-fda2e393ba3c name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:38:54 embed-certs-422728 crio[846]: time="2025-11-09T14:38:54.750556612Z" level=info msg="Started container" PID=1792 containerID=36e84808d460a6f7c218afc5b6e1f26d0810baaefa922036d6bbf44a9452c39a description=default/busybox/busybox id=fdf78986-abc5-42c5-b6a3-fda2e393ba3c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d303aed0a1c67e35e3a30463da6ec801a48d025d07d3f334e09b84b2e67a4d2c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	36e84808d460a       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   d303aed0a1c67       busybox                                      default
	cd4c51bebf450       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   99f439221141d       coredns-66bc5c9577-4hk6l                     kube-system
	3e2996af763ec       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   6a40b2fa681e8       storage-provisioner                          kube-system
	dbea4850ded65       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   6b6b5baec9fa4       kindnet-29xxd                                kube-system
	0725974398a94       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   280c0cc4e5f57       kube-proxy-5zn8j                             kube-system
	eb34e0f31d9e4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   4aba2aeda82a5       etcd-embed-certs-422728                      kube-system
	7e0132ab090a4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   df34f082a78f4       kube-apiserver-embed-certs-422728            kube-system
	95367553d7ad4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   584d9a847cdef       kube-scheduler-embed-certs-422728            kube-system
	3c617d7e1acb1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   21199b20b6f54       kube-controller-manager-embed-certs-422728   kube-system
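	
	The container status table above is gathered from the node's CRI endpoint. As a sketch, assuming crictl is available in the node image as it normally is for the crio runtime, the same listing could be regenerated from a shell on the node:
	
	  # open a shell on the profile's node and list all containers (columns may differ slightly)
	  minikube -p embed-certs-422728 ssh -- sudo crictl ps -a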
	
	
	==> coredns [cd4c51bebf45040b98405d22010de2bc21c02ecda159ad9249eb45233f24c2ef] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44069 - 18159 "HINFO IN 5386442615321086841.8755009623240173312. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010765904s
	
	
	==> describe nodes <==
	Name:               embed-certs-422728
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-422728
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=embed-certs-422728
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_38_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-422728
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:38:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:38:48 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:38:48 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:38:48 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:38:48 +0000   Sun, 09 Nov 2025 14:38:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-422728
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d088bd86-8a64-46dd-b81e-fc8968fd6fcd
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-4hk6l                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-422728                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-29xxd                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-422728             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-422728    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-5zn8j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-422728             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node embed-certs-422728 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientPID
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-422728 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-422728 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-422728 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-422728 event: Registered Node embed-certs-422728 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-422728 status is now: NodeReady
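	
	The node description above is standard kubectl describe output; the same view can be pulled directly against the cluster using the context configured earlier in this log. Illustrative command only:
	
	  kubectl --context embed-certs-422728 describe node embed-certs-422728
	  # the Allocated resources block shows 850m CPU requested of 2 cores (42%), matching the table above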
	
	
	==> dmesg <==
	[ +35.606556] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [eb34e0f31d9e44ab6e71b4b48a207961c0363a26e8bde143dc707248dfb165a2] <==
	{"level":"warn","ts":"2025-11-09T14:37:56.152064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.177744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.250562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.310021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.359936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.387562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.491673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.546725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.586983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.624153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.647200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.700783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.710652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.727950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.747060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.767152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.788941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.803941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.828429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.858294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.881570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:56.913180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:37:57.052778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33912","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-09T14:38:07.952444Z","caller":"traceutil/trace.go:172","msg":"trace[791842909] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"102.419164ms","start":"2025-11-09T14:38:07.850004Z","end":"2025-11-09T14:38:07.952424Z","steps":["trace[791842909] 'process raft request'  (duration: 78.180636ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-09T14:38:07.953531Z","caller":"traceutil/trace.go:172","msg":"trace[2125780685] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"103.453822ms","start":"2025-11-09T14:38:07.850065Z","end":"2025-11-09T14:38:07.953519Z","steps":["trace[2125780685] 'process raft request'  (duration: 78.146133ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:39:02 up  1:21,  0 user,  load average: 2.35, 3.27, 2.72
	Linux embed-certs-422728 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dbea4850ded653ecc72d308aaa39982c66be7e99c97d63231c9e4100facb8048] <==
	I1109 14:38:08.534550       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:38:08.535607       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:38:08.535744       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:38:08.535755       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:38:08.535766       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:38:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:38:08.737135       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:38:08.737155       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:38:08.737163       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:38:08.737268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:38:38.736614       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1109 14:38:38.737593       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:38:38.737607       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:38:38.737693       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1109 14:38:40.237983       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:38:40.238030       1 metrics.go:72] Registering metrics
	I1109 14:38:40.238092       1 controller.go:711] "Syncing nftables rules"
	I1109 14:38:48.743117       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:38:48.743179       1 main.go:301] handling current node
	I1109 14:38:58.738547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:38:58.738593       1 main.go:301] handling current node
	
	
	==> kube-apiserver [7e0132ab090a4839b41a64c1bdfb4bbac45f43898d7f788955008c071f05f28f] <==
	I1109 14:37:58.557266       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:37:58.567575       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:37:58.588030       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:37:58.588195       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:37:58.638268       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:37:58.666488       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:37:58.695783       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:37:58.943485       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:37:58.955266       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:37:58.955296       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:37:59.965727       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:38:00.169451       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:38:00.500759       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:38:00.518809       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1109 14:38:00.520363       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:38:00.536226       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:38:01.257341       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:38:01.565246       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:38:01.587434       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:38:01.600923       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:38:07.141380       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:38:07.149184       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:38:07.303040       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:38:07.376570       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1109 14:39:00.524488       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37824: use of closed network connection
	
	
	==> kube-controller-manager [3c617d7e1acb1a1ad84769a650c7faa5264d3295bee286a3f5c600acd9fbd7bf] <==
	I1109 14:38:06.486871       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:38:06.486900       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:38:06.486947       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:38:06.486970       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:38:06.489360       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:38:06.489417       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:38:06.489437       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:38:06.489454       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:38:06.489460       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:38:06.489956       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:38:06.490104       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:38:06.490143       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:38:06.490173       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:38:06.503399       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:38:06.503772       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:38:06.503823       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:38:06.510334       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:38:06.525913       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:38:06.526528       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:38:06.526553       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:38:06.526560       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1109 14:38:06.536794       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 14:38:06.536841       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:38:06.554966       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-422728" podCIDRs=["10.244.0.0/24"]
	I1109 14:38:51.493845       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0725974398a94b9059e4a9e5363a16eddb3c1d0f815063206b3192091cfb8154] <==
	I1109 14:38:08.473601       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:38:08.657538       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:38:08.758565       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:38:08.758610       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:38:08.758710       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:38:09.005559       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:38:09.005686       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:38:09.012126       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:38:09.012632       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:38:09.013732       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:38:09.015455       1 config.go:200] "Starting service config controller"
	I1109 14:38:09.015632       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:38:09.015702       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:38:09.015734       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:38:09.015766       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:38:09.015790       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:38:09.016611       1 config.go:309] "Starting node config controller"
	I1109 14:38:09.016685       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:38:09.016733       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:38:09.116366       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:38:09.116415       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1109 14:38:09.116374       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [95367553d7ad439b835b5e1bbb248f6898359f3e78aa1937563f9977ac6f42b3] <==
	I1109 14:37:59.465312       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:37:59.467481       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:37:59.467532       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:37:59.467928       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:37:59.467988       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:37:59.485997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:37:59.489854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:37:59.492162       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:37:59.492233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:37:59.492292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:37:59.492611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:37:59.492750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1109 14:37:59.493194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:37:59.498399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:37:59.498577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:37:59.498730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:37:59.499054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:37:59.499240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:37:59.499395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:37:59.499572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:37:59.499777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:37:59.500047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:37:59.500223       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:37:59.500300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1109 14:38:00.568684       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:38:06 embed-certs-422728 kubelet[1303]: I1109 14:38:06.577601    1303 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 09 14:38:06 embed-certs-422728 kubelet[1303]: I1109 14:38:06.578336    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 09 14:38:07 embed-certs-422728 kubelet[1303]: I1109 14:38:07.805270    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91237b20-cef1-4550-bd7c-cbf7ec8d850c-xtables-lock\") pod \"kube-proxy-5zn8j\" (UID: \"91237b20-cef1-4550-bd7c-cbf7ec8d850c\") " pod="kube-system/kube-proxy-5zn8j"
	Nov 09 14:38:07 embed-certs-422728 kubelet[1303]: I1109 14:38:07.805346    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91237b20-cef1-4550-bd7c-cbf7ec8d850c-kube-proxy\") pod \"kube-proxy-5zn8j\" (UID: \"91237b20-cef1-4550-bd7c-cbf7ec8d850c\") " pod="kube-system/kube-proxy-5zn8j"
	Nov 09 14:38:07 embed-certs-422728 kubelet[1303]: I1109 14:38:07.805381    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91237b20-cef1-4550-bd7c-cbf7ec8d850c-lib-modules\") pod \"kube-proxy-5zn8j\" (UID: \"91237b20-cef1-4550-bd7c-cbf7ec8d850c\") " pod="kube-system/kube-proxy-5zn8j"
	Nov 09 14:38:07 embed-certs-422728 kubelet[1303]: I1109 14:38:07.805420    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csnvw\" (UniqueName: \"kubernetes.io/projected/91237b20-cef1-4550-bd7c-cbf7ec8d850c-kube-api-access-csnvw\") pod \"kube-proxy-5zn8j\" (UID: \"91237b20-cef1-4550-bd7c-cbf7ec8d850c\") " pod="kube-system/kube-proxy-5zn8j"
	Nov 09 14:38:07 embed-certs-422728 kubelet[1303]: I1109 14:38:07.905969    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gss\" (UniqueName: \"kubernetes.io/projected/081cda95-4468-46a9-a913-ec3c53472afd-kube-api-access-h4gss\") pod \"kindnet-29xxd\" (UID: \"081cda95-4468-46a9-a913-ec3c53472afd\") " pod="kube-system/kindnet-29xxd"
	Nov 09 14:38:07 embed-certs-422728 kubelet[1303]: I1109 14:38:07.906033    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/081cda95-4468-46a9-a913-ec3c53472afd-cni-cfg\") pod \"kindnet-29xxd\" (UID: \"081cda95-4468-46a9-a913-ec3c53472afd\") " pod="kube-system/kindnet-29xxd"
	Nov 09 14:38:07 embed-certs-422728 kubelet[1303]: I1109 14:38:07.906054    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/081cda95-4468-46a9-a913-ec3c53472afd-xtables-lock\") pod \"kindnet-29xxd\" (UID: \"081cda95-4468-46a9-a913-ec3c53472afd\") " pod="kube-system/kindnet-29xxd"
	Nov 09 14:38:07 embed-certs-422728 kubelet[1303]: I1109 14:38:07.906072    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/081cda95-4468-46a9-a913-ec3c53472afd-lib-modules\") pod \"kindnet-29xxd\" (UID: \"081cda95-4468-46a9-a913-ec3c53472afd\") " pod="kube-system/kindnet-29xxd"
	Nov 09 14:38:08 embed-certs-422728 kubelet[1303]: I1109 14:38:08.048487    1303 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:38:08 embed-certs-422728 kubelet[1303]: W1109 14:38:08.325916    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/crio-280c0cc4e5f5727ea268106dd58bb20863ef1c9ecc8c35ee23cc69115e9ff1d4 WatchSource:0}: Error finding container 280c0cc4e5f5727ea268106dd58bb20863ef1c9ecc8c35ee23cc69115e9ff1d4: Status 404 returned error can't find the container with id 280c0cc4e5f5727ea268106dd58bb20863ef1c9ecc8c35ee23cc69115e9ff1d4
	Nov 09 14:38:08 embed-certs-422728 kubelet[1303]: W1109 14:38:08.421364    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/crio-6b6b5baec9fa42bcba22dfffe4a08b41a8cde308ecceb4657bb77da447182060 WatchSource:0}: Error finding container 6b6b5baec9fa42bcba22dfffe4a08b41a8cde308ecceb4657bb77da447182060: Status 404 returned error can't find the container with id 6b6b5baec9fa42bcba22dfffe4a08b41a8cde308ecceb4657bb77da447182060
	Nov 09 14:38:08 embed-certs-422728 kubelet[1303]: I1109 14:38:08.971162    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5zn8j" podStartSLOduration=1.971133852 podStartE2EDuration="1.971133852s" podCreationTimestamp="2025-11-09 14:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:38:08.9696602 +0000 UTC m=+7.484400717" watchObservedRunningTime="2025-11-09 14:38:08.971133852 +0000 UTC m=+7.485874369"
	Nov 09 14:38:08 embed-certs-422728 kubelet[1303]: I1109 14:38:08.971816    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-29xxd" podStartSLOduration=1.97180415 podStartE2EDuration="1.97180415s" podCreationTimestamp="2025-11-09 14:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:38:08.938774818 +0000 UTC m=+7.453515335" watchObservedRunningTime="2025-11-09 14:38:08.97180415 +0000 UTC m=+7.486544667"
	Nov 09 14:38:48 embed-certs-422728 kubelet[1303]: I1109 14:38:48.910989    1303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 09 14:38:49 embed-certs-422728 kubelet[1303]: I1109 14:38:49.043484    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e11ae084-2938-40cc-9538-cffa02747d9b-tmp\") pod \"storage-provisioner\" (UID: \"e11ae084-2938-40cc-9538-cffa02747d9b\") " pod="kube-system/storage-provisioner"
	Nov 09 14:38:49 embed-certs-422728 kubelet[1303]: I1109 14:38:49.043703    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85300af6-fc4a-42dd-b6f9-4374a4461cdc-config-volume\") pod \"coredns-66bc5c9577-4hk6l\" (UID: \"85300af6-fc4a-42dd-b6f9-4374a4461cdc\") " pod="kube-system/coredns-66bc5c9577-4hk6l"
	Nov 09 14:38:49 embed-certs-422728 kubelet[1303]: I1109 14:38:49.043740    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwhq\" (UniqueName: \"kubernetes.io/projected/85300af6-fc4a-42dd-b6f9-4374a4461cdc-kube-api-access-sfwhq\") pod \"coredns-66bc5c9577-4hk6l\" (UID: \"85300af6-fc4a-42dd-b6f9-4374a4461cdc\") " pod="kube-system/coredns-66bc5c9577-4hk6l"
	Nov 09 14:38:49 embed-certs-422728 kubelet[1303]: I1109 14:38:49.043771    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k47lg\" (UniqueName: \"kubernetes.io/projected/e11ae084-2938-40cc-9538-cffa02747d9b-kube-api-access-k47lg\") pod \"storage-provisioner\" (UID: \"e11ae084-2938-40cc-9538-cffa02747d9b\") " pod="kube-system/storage-provisioner"
	Nov 09 14:38:49 embed-certs-422728 kubelet[1303]: W1109 14:38:49.315084    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/crio-99f439221141d444baa990b2d5a1803fddf4ff217514d45e542d142ec22f7a06 WatchSource:0}: Error finding container 99f439221141d444baa990b2d5a1803fddf4ff217514d45e542d142ec22f7a06: Status 404 returned error can't find the container with id 99f439221141d444baa990b2d5a1803fddf4ff217514d45e542d142ec22f7a06
	Nov 09 14:38:50 embed-certs-422728 kubelet[1303]: I1109 14:38:50.053585    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4hk6l" podStartSLOduration=43.053562984 podStartE2EDuration="43.053562984s" podCreationTimestamp="2025-11-09 14:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:38:50.03787713 +0000 UTC m=+48.552617647" watchObservedRunningTime="2025-11-09 14:38:50.053562984 +0000 UTC m=+48.568303484"
	Nov 09 14:38:50 embed-certs-422728 kubelet[1303]: I1109 14:38:50.074077    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.074056277 podStartE2EDuration="42.074056277s" podCreationTimestamp="2025-11-09 14:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:38:50.054698706 +0000 UTC m=+48.569439264" watchObservedRunningTime="2025-11-09 14:38:50.074056277 +0000 UTC m=+48.588796786"
	Nov 09 14:38:52 embed-certs-422728 kubelet[1303]: I1109 14:38:52.265572    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkb2\" (UniqueName: \"kubernetes.io/projected/219a9c8a-eefa-4542-a8f6-78c4f56bea13-kube-api-access-sjkb2\") pod \"busybox\" (UID: \"219a9c8a-eefa-4542-a8f6-78c4f56bea13\") " pod="default/busybox"
	Nov 09 14:38:52 embed-certs-422728 kubelet[1303]: W1109 14:38:52.444249    1303 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/crio-d303aed0a1c67e35e3a30463da6ec801a48d025d07d3f334e09b84b2e67a4d2c WatchSource:0}: Error finding container d303aed0a1c67e35e3a30463da6ec801a48d025d07d3f334e09b84b2e67a4d2c: Status 404 returned error can't find the container with id d303aed0a1c67e35e3a30463da6ec801a48d025d07d3f334e09b84b2e67a4d2c
	
	
	==> storage-provisioner [3e2996af763ec02d640ff3062e52f4670b0d963e73ce5322c21ff9880cbd5daf] <==
	I1109 14:38:49.344209       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:38:49.362390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:38:49.362433       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:38:49.364950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:49.373885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:38:49.374117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:38:49.374295       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-422728_3f200ce8-6d50-4d60-9e3c-8d4e6945c7d8!
	I1109 14:38:49.375266       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75bbad6d-f285-4ed2-83c3-c9896fff11ae", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-422728_3f200ce8-6d50-4d60-9e3c-8d4e6945c7d8 became leader
	W1109 14:38:49.388814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:49.409120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:38:49.474851       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-422728_3f200ce8-6d50-4d60-9e3c-8d4e6945c7d8!
	W1109 14:38:51.411975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:51.416485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:53.419282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:53.423519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:55.426929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:55.431830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:57.435433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:57.440191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:59.443283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:38:59.459680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:39:01.462704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:39:01.476269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-422728 -n embed-certs-422728
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-422728 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-103048 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-103048 --alsologtostderr -v=1: exit status 80 (1.722706982s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-103048 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:40:18.554324  200617 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:40:18.554527  200617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:40:18.554558  200617 out.go:374] Setting ErrFile to fd 2...
	I1109 14:40:18.554672  200617 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:40:18.555180  200617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:40:18.555525  200617 out.go:368] Setting JSON to false
	I1109 14:40:18.555596  200617 mustload.go:66] Loading cluster: default-k8s-diff-port-103048
	I1109 14:40:18.556089  200617 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:40:18.556604  200617 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:40:18.579579  200617 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:40:18.580071  200617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:40:18.645054  200617 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:40:18.634803075 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:40:18.645776  200617 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-103048 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:40:18.650099  200617 out.go:179] * Pausing node default-k8s-diff-port-103048 ... 
	I1109 14:40:18.653705  200617 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:40:18.654059  200617 ssh_runner.go:195] Run: systemctl --version
	I1109 14:40:18.654110  200617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:40:18.672368  200617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:40:18.778914  200617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:40:18.794074  200617 pause.go:52] kubelet running: true
	I1109 14:40:18.794152  200617 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:40:19.099376  200617 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:40:19.099467  200617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:40:19.168140  200617 cri.go:89] found id: "887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c"
	I1109 14:40:19.168169  200617 cri.go:89] found id: "dc599fcbb33507001316216bcb43133c63b59a24b97538fdfa9814b27f4e7cee"
	I1109 14:40:19.168174  200617 cri.go:89] found id: "be38569ab0491d9f49c9dcbf8de0ce6af947e38961945fd5b81c78da6c67aadb"
	I1109 14:40:19.168178  200617 cri.go:89] found id: "cf78d41778b8d4241abc1e4adceffd57e40c195173e228a0d2dca2bd521cce85"
	I1109 14:40:19.168182  200617 cri.go:89] found id: "da4b547d025130d284095282ff9a975da37d1fb29f3f9f7f5b591578b7601596"
	I1109 14:40:19.168185  200617 cri.go:89] found id: "7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d"
	I1109 14:40:19.168188  200617 cri.go:89] found id: "0c584231ed8c3e74b3273a950c29860375fa8aeb7da46e7a2e139930d0830dd1"
	I1109 14:40:19.168209  200617 cri.go:89] found id: "6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb"
	I1109 14:40:19.168219  200617 cri.go:89] found id: "7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f"
	I1109 14:40:19.168241  200617 cri.go:89] found id: "ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	I1109 14:40:19.168245  200617 cri.go:89] found id: "b77c688e34b493a3b43c4d4222447f464615cadf3927d84572de47e9f20273fb"
	I1109 14:40:19.168248  200617 cri.go:89] found id: ""
	I1109 14:40:19.168306  200617 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:40:19.186953  200617 retry.go:31] will retry after 209.342722ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:40:19Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:40:19.397519  200617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:40:19.412901  200617 pause.go:52] kubelet running: false
	I1109 14:40:19.412972  200617 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:40:19.591512  200617 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:40:19.591593  200617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:40:19.664855  200617 cri.go:89] found id: "887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c"
	I1109 14:40:19.664876  200617 cri.go:89] found id: "dc599fcbb33507001316216bcb43133c63b59a24b97538fdfa9814b27f4e7cee"
	I1109 14:40:19.664882  200617 cri.go:89] found id: "be38569ab0491d9f49c9dcbf8de0ce6af947e38961945fd5b81c78da6c67aadb"
	I1109 14:40:19.664886  200617 cri.go:89] found id: "cf78d41778b8d4241abc1e4adceffd57e40c195173e228a0d2dca2bd521cce85"
	I1109 14:40:19.664890  200617 cri.go:89] found id: "da4b547d025130d284095282ff9a975da37d1fb29f3f9f7f5b591578b7601596"
	I1109 14:40:19.664894  200617 cri.go:89] found id: "7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d"
	I1109 14:40:19.664898  200617 cri.go:89] found id: "0c584231ed8c3e74b3273a950c29860375fa8aeb7da46e7a2e139930d0830dd1"
	I1109 14:40:19.664901  200617 cri.go:89] found id: "6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb"
	I1109 14:40:19.664904  200617 cri.go:89] found id: "7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f"
	I1109 14:40:19.664913  200617 cri.go:89] found id: "ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	I1109 14:40:19.664921  200617 cri.go:89] found id: "b77c688e34b493a3b43c4d4222447f464615cadf3927d84572de47e9f20273fb"
	I1109 14:40:19.664925  200617 cri.go:89] found id: ""
	I1109 14:40:19.664977  200617 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:40:19.676196  200617 retry.go:31] will retry after 241.127488ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:40:19Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:40:19.917698  200617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:40:19.932324  200617 pause.go:52] kubelet running: false
	I1109 14:40:19.932407  200617 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:40:20.112549  200617 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:40:20.112637  200617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:40:20.182949  200617 cri.go:89] found id: "887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c"
	I1109 14:40:20.182974  200617 cri.go:89] found id: "dc599fcbb33507001316216bcb43133c63b59a24b97538fdfa9814b27f4e7cee"
	I1109 14:40:20.182980  200617 cri.go:89] found id: "be38569ab0491d9f49c9dcbf8de0ce6af947e38961945fd5b81c78da6c67aadb"
	I1109 14:40:20.182984  200617 cri.go:89] found id: "cf78d41778b8d4241abc1e4adceffd57e40c195173e228a0d2dca2bd521cce85"
	I1109 14:40:20.182988  200617 cri.go:89] found id: "da4b547d025130d284095282ff9a975da37d1fb29f3f9f7f5b591578b7601596"
	I1109 14:40:20.182991  200617 cri.go:89] found id: "7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d"
	I1109 14:40:20.182994  200617 cri.go:89] found id: "0c584231ed8c3e74b3273a950c29860375fa8aeb7da46e7a2e139930d0830dd1"
	I1109 14:40:20.182998  200617 cri.go:89] found id: "6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb"
	I1109 14:40:20.183011  200617 cri.go:89] found id: "7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f"
	I1109 14:40:20.183018  200617 cri.go:89] found id: "ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	I1109 14:40:20.183023  200617 cri.go:89] found id: "b77c688e34b493a3b43c4d4222447f464615cadf3927d84572de47e9f20273fb"
	I1109 14:40:20.183026  200617 cri.go:89] found id: ""
	I1109 14:40:20.183076  200617 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:40:20.199332  200617 out.go:203] 
	W1109 14:40:20.202482  200617 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:40:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:40:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:40:20.202508  200617 out.go:285] * 
	* 
	W1109 14:40:20.207448  200617 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:40:20.211654  200617 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-103048 --alsologtostderr -v=1 failed: exit status 80
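The pause failure above reduces to `sudo runc list -f json` exiting with "open /run/runc: no such file or directory": the pause path lists the running CRI containers and then asks runc for its state, but runc's default state root (/run/runc) is not present on this node. The Go sketch below is a minimal diagnostic to run inside the node (for example via `minikube ssh`); it is not minikube's implementation, and the /run/crun candidate path is an assumption based on crun using that directory for its state when it is the configured OCI runtime.

	// diagnose_runc_root.go: probe candidate OCI runtime state roots, then run
	// the same listing command the test uses. Paths are assumptions for
	// illustration, not taken from the test itself.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		for _, root := range []string{"/run/runc", "/run/crun"} {
			if _, err := os.Stat(root); err != nil {
				fmt.Printf("state root %s: %v\n", root, err)
			} else {
				fmt.Printf("state root %s exists\n", root)
			}
		}

		// Same invocation that fails in the log above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc list: err=%v output=%s\n", err, out)
	}

If /run/crun exists while /run/runc does not, that would suggest the listing step is querying a state root that the node's CRI-O runtime configuration never populates.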
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-103048
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-103048:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3",
	        "Created": "2025-11-09T14:37:24.407836175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:39:13.793271243Z",
	            "FinishedAt": "2025-11-09T14:39:12.972944207Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/hosts",
	        "LogPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3-json.log",
	        "Name": "/default-k8s-diff-port-103048",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-103048:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-103048",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3",
	                "LowerDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-103048",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-103048/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-103048",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-103048",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-103048",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8bf862dff8d5fabd25c666df989d025302cb56761a371d2137c6bf76b96a6a5c",
	            "SandboxKey": "/var/run/docker/netns/8bf862dff8d5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-103048": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:74:a7:b0:18:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f575eafa491ba158377eb7b6fb901ba71cca9fc0a5cdf5e89e6c475d768dfea9",
	                    "EndpointID": "adf605d034ac4721d9b0ff3dcc5a30703f1f501d36bcc2c4ded3f979a07ddef8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-103048",
	                        "6ee0024be4f4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048: exit status 2 (392.938475ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-103048 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-103048 logs -n 25: (1.817085538s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-276181                                                                                                                                                                                                                        │ cert-options-276181          │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:34 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	│ stop    │ -p old-k8s-version-349599 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ image   │ old-k8s-version-349599 image list --format=json                                                                                                                                                                                               │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ pause   │ -p old-k8s-version-349599 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ delete  │ -p cert-expiration-179822                                                                                                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ stop    │ -p embed-certs-422728 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ image   │ default-k8s-diff-port-103048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p default-k8s-diff-port-103048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:39:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:39:15.653812  196795 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:39:15.654001  196795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:39:15.654038  196795 out.go:374] Setting ErrFile to fd 2...
	I1109 14:39:15.654052  196795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:39:15.654356  196795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:39:15.654781  196795 out.go:368] Setting JSON to false
	I1109 14:39:15.655688  196795 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4906,"bootTime":1762694250,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:39:15.655757  196795 start.go:143] virtualization:  
	I1109 14:39:15.660654  196795 out.go:179] * [embed-certs-422728] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:39:15.663936  196795 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:39:15.663991  196795 notify.go:221] Checking for updates...
	I1109 14:39:15.670031  196795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:39:15.672921  196795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:15.675823  196795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:39:15.678877  196795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:39:15.681871  196795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:39:15.685303  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:15.685991  196795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:39:15.716089  196795 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:39:15.716233  196795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:39:15.783072  196795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-09 14:39:15.77300627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:39:15.783205  196795 docker.go:319] overlay module found
	I1109 14:39:15.786478  196795 out.go:179] * Using the docker driver based on existing profile
	I1109 14:39:15.789381  196795 start.go:309] selected driver: docker
	I1109 14:39:15.789420  196795 start.go:930] validating driver "docker" against &{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:15.789515  196795 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:39:15.790229  196795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:39:15.845783  196795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-09 14:39:15.836143549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:39:15.846132  196795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:15.846168  196795 cni.go:84] Creating CNI manager for ""
	I1109 14:39:15.846227  196795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:15.846266  196795 start.go:353] cluster config:
	{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:15.849466  196795 out.go:179] * Starting "embed-certs-422728" primary control-plane node in "embed-certs-422728" cluster
	I1109 14:39:15.852353  196795 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:39:15.855395  196795 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:39:15.858354  196795 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:15.858406  196795 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:39:15.858425  196795 cache.go:65] Caching tarball of preloaded images
	I1109 14:39:15.858430  196795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:39:15.858538  196795 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:39:15.858550  196795 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:39:15.858709  196795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:39:15.879215  196795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:39:15.879245  196795 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:39:15.879257  196795 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:39:15.879367  196795 start.go:360] acquireMachinesLock for embed-certs-422728: {Name:mkaf26c3066ebca49339c9527aed846108c5e799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:39:15.879441  196795 start.go:364] duration metric: took 46.114µs to acquireMachinesLock for "embed-certs-422728"
	I1109 14:39:15.879465  196795 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:39:15.879476  196795 fix.go:54] fixHost starting: 
	I1109 14:39:15.879824  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:15.897379  196795 fix.go:112] recreateIfNeeded on embed-certs-422728: state=Stopped err=<nil>
	W1109 14:39:15.897409  196795 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:39:13.761899  196129 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103048" ...
	I1109 14:39:13.762001  196129 cli_runner.go:164] Run: docker start default-k8s-diff-port-103048
	I1109 14:39:14.005697  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:14.031930  196129 kic.go:430] container "default-k8s-diff-port-103048" state is running.
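	The restart path here reduces to a docker start followed by an inspect of the container state before provisioning continues. A rough local equivalent of that pair, with only the container name taken from the log (the rest is a sketch, not minikube's kic driver):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        const name = "default-k8s-diff-port-103048" // container name from the log

        // Start the stopped container, then re-read its state the way the
        // log does with `docker container inspect --format={{.State.Status}}`.
        if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
            fmt.Printf("docker start failed: %v: %s", err, out)
            return
        }
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("container state:", strings.TrimSpace(string(out)))
    }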
	I1109 14:39:14.032334  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:14.054133  196129 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/config.json ...
	I1109 14:39:14.054518  196129 machine.go:94] provisionDockerMachine start ...
	I1109 14:39:14.054646  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:14.076480  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:14.076798  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:14.076807  196129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:39:14.077436  196129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58722->127.0.0.1:33065: read: connection reset by peer
	I1109 14:39:17.231473  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:39:17.231499  196129 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103048"
	I1109 14:39:17.231624  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.249722  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.250048  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.250064  196129 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103048 && echo "default-k8s-diff-port-103048" | sudo tee /etc/hostname
	I1109 14:39:17.410092  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:39:17.410211  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.428985  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.429306  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.429330  196129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:39:17.580249  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:39:17.580275  196129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:39:17.580301  196129 ubuntu.go:190] setting up certificates
	I1109 14:39:17.580311  196129 provision.go:84] configureAuth start
	I1109 14:39:17.580368  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:17.598399  196129 provision.go:143] copyHostCerts
	I1109 14:39:17.598470  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:39:17.598489  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:39:17.598565  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:39:17.598662  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:39:17.598674  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:39:17.598703  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:39:17.598755  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:39:17.598765  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:39:17.598788  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:39:17.598837  196129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103048 localhost minikube]
	I1109 14:39:17.688954  196129 provision.go:177] copyRemoteCerts
	I1109 14:39:17.689019  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:39:17.689060  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.708206  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:17.819695  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:39:17.837093  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 14:39:17.854586  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:39:17.871745  196129 provision.go:87] duration metric: took 291.419804ms to configureAuth
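	configureAuth above regenerates the machine's server certificate against the shared CA, with the SANs listed a few lines earlier (127.0.0.1, 192.168.85.2, default-k8s-diff-port-103048, localhost, minikube). A self-contained sketch of issuing a server certificate with those SANs; the throwaway CA, validity window, and output file are assumptions, since minikube actually reuses the CA under .minikube/certs:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem; errors are
        // dropped to keep the sketch short.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the provision.go line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-103048"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:     []string{"default-k8s-diff-port-103048", "localhost", "minikube"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
    }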
	I1109 14:39:17.871814  196129 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:39:17.872050  196129 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:17.872194  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.889492  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.889805  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.889825  196129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:39:18.202831  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:39:18.202918  196129 machine.go:97] duration metric: took 4.148387076s to provisionDockerMachine
	I1109 14:39:18.202944  196129 start.go:293] postStartSetup for "default-k8s-diff-port-103048" (driver="docker")
	I1109 14:39:18.202988  196129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:39:18.203082  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:39:18.203170  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.224891  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.335626  196129 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:39:18.338990  196129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:39:18.339018  196129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:39:18.339029  196129 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:39:18.339123  196129 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:39:18.339197  196129 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:39:18.339307  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:39:18.347413  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:18.365395  196129 start.go:296] duration metric: took 162.403249ms for postStartSetup
	I1109 14:39:18.365474  196129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:39:18.365513  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.383461  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.485492  196129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:39:18.490710  196129 fix.go:56] duration metric: took 4.748854309s for fixHost
	I1109 14:39:18.490737  196129 start.go:83] releasing machines lock for "default-k8s-diff-port-103048", held for 4.748905699s
	I1109 14:39:18.490807  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:18.508468  196129 ssh_runner.go:195] Run: cat /version.json
	I1109 14:39:18.508516  196129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:39:18.508525  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.508574  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.533762  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.534380  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.733641  196129 ssh_runner.go:195] Run: systemctl --version
	I1109 14:39:18.740509  196129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:39:18.777813  196129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:39:18.782333  196129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:39:18.782411  196129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:39:18.790609  196129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:39:18.790636  196129 start.go:496] detecting cgroup driver to use...
	I1109 14:39:18.790700  196129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:39:18.790764  196129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:39:18.806443  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:39:18.820129  196129 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:39:18.820246  196129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:39:18.836297  196129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:39:18.849893  196129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:39:18.961965  196129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:39:19.074901  196129 docker.go:234] disabling docker service ...
	I1109 14:39:19.075010  196129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:39:19.090357  196129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:39:19.103755  196129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:39:19.214649  196129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:39:19.369065  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:39:19.382216  196129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:39:19.396769  196129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:39:19.396864  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.415946  196129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:39:19.416022  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.427276  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.437233  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.447125  196129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:39:19.455793  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.468606  196129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.482521  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.491385  196129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:39:19.499271  196129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:39:19.507157  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:19.643285  196129 ssh_runner.go:195] Run: sudo systemctl restart crio
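	The block above patches /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, port/sysctl settings) and then reloads systemd and restarts crio. A compact sketch of the same sequence driven from Go, with the commands copied from the log but trimmed to the two main edits plus the restart; it is only meaningful inside a minikube node:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes one shell step and surfaces its combined output on failure,
    // loosely mirroring what ssh_runner.Run does over SSH in the log.
    func run(script string) error {
        out, err := exec.Command("sh", "-c", script).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%q: %v: %s", script, err, out)
        }
        return nil
    }

    func main() {
        steps := []string{
            `sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf`,
            `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
            "sudo systemctl daemon-reload",
            "sudo systemctl restart crio",
        }
        for _, s := range steps {
            if err := run(s); err != nil {
                fmt.Println(err)
                return
            }
        }
    }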
	I1109 14:39:19.789716  196129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:39:19.789782  196129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:39:19.802113  196129 start.go:564] Will wait 60s for crictl version
	I1109 14:39:19.802187  196129 ssh_runner.go:195] Run: which crictl
	I1109 14:39:19.806163  196129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:39:19.850016  196129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:39:19.850100  196129 ssh_runner.go:195] Run: crio --version
	I1109 14:39:19.886662  196129 ssh_runner.go:195] Run: crio --version
	I1109 14:39:19.922121  196129 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
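	Before reporting the runtime as ready, the start path stats the CRI socket and then shells out to crictl and crio for version checks, each under a 60-second budget. A minimal polling sketch of that readiness probe; the socket path, the 60s budget, and the crictl location match the log, while the retry interval is an arbitrary choice:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock" // socket path from the log
        deadline := time.Now().Add(60 * time.Second)

        // Poll until the CRI socket exists or the budget is exhausted.
        for {
            if _, err := os.Stat(sock); err == nil {
                break
            }
            if time.Now().After(deadline) {
                fmt.Println("timed out waiting for", sock)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }

        // Same version probe the log runs once the socket is present.
        out, err := exec.Command("sudo", "/usr/local/bin/crictl", "version").CombinedOutput()
        if err != nil {
            fmt.Println("crictl version failed:", err)
            return
        }
        fmt.Printf("%s", out)
    }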
	I1109 14:39:15.900502  196795 out.go:252] * Restarting existing docker container for "embed-certs-422728" ...
	I1109 14:39:15.900586  196795 cli_runner.go:164] Run: docker start embed-certs-422728
	I1109 14:39:16.155027  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:16.179053  196795 kic.go:430] container "embed-certs-422728" state is running.
	I1109 14:39:16.179431  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:16.202650  196795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:39:16.202886  196795 machine.go:94] provisionDockerMachine start ...
	I1109 14:39:16.202954  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:16.226627  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:16.227039  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:16.227058  196795 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:39:16.227903  196795 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:39:19.403380  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:39:19.403421  196795 ubuntu.go:182] provisioning hostname "embed-certs-422728"
	I1109 14:39:19.403526  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:19.425865  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:19.426162  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:19.426172  196795 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-422728 && echo "embed-certs-422728" | sudo tee /etc/hostname
	I1109 14:39:19.604836  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:39:19.604972  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:19.627515  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:19.627823  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:19.627846  196795 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422728/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:39:19.784610  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:39:19.784640  196795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:39:19.784720  196795 ubuntu.go:190] setting up certificates
	I1109 14:39:19.784751  196795 provision.go:84] configureAuth start
	I1109 14:39:19.784837  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:19.811636  196795 provision.go:143] copyHostCerts
	I1109 14:39:19.811695  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:39:19.811709  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:39:19.811785  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:39:19.811895  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:39:19.811901  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:39:19.811929  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:39:19.811991  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:39:19.811995  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:39:19.812021  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:39:19.812067  196795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422728 san=[127.0.0.1 192.168.76.2 embed-certs-422728 localhost minikube]
	I1109 14:39:20.018694  196795 provision.go:177] copyRemoteCerts
	I1109 14:39:20.018776  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:39:20.018829  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.041481  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.156424  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:39:20.179967  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1109 14:39:20.205588  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:39:20.224981  196795 provision.go:87] duration metric: took 440.207382ms to configureAuth
	I1109 14:39:20.225018  196795 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:39:20.225226  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:20.225355  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.251487  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:20.251808  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:20.251826  196795 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:39:19.924910  196129 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:39:19.947696  196129 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:39:19.951833  196129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:19.966489  196129 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:39:19.966612  196129 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:19.966665  196129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:20.014624  196129 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:20.014649  196129 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:39:20.014710  196129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:20.061070  196129 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:20.061092  196129 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:39:20.061100  196129 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:39:20.061201  196129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:39:20.061279  196129 ssh_runner.go:195] Run: crio config
	I1109 14:39:20.135847  196129 cni.go:84] Creating CNI manager for ""
	I1109 14:39:20.135907  196129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:20.135931  196129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:39:20.135955  196129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103048 NodeName:default-k8s-diff-port-103048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:39:20.136111  196129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:39:20.136224  196129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:39:20.144992  196129 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:39:20.145080  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:39:20.154676  196129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:39:20.171245  196129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:39:20.185580  196129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
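The kubeadm.yaml.new written above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, as dumped at kubeadm.go:196). A minimal Go sketch of splitting such a stream and listing each document, assuming gopkg.in/yaml.v3 is available; illustrative only, not minikube's own code:

    // Illustrative only (not minikube code): print the apiVersion/kind of every
    // document in a multi-document kubeadm config such as kubeadm.yaml.new.
    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err)
    		}
    		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
    	}
    }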
	I1109 14:39:20.201765  196129 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:39:20.206582  196129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
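The two commands above pin control-plane.minikube.internal in /etc/hosts by stripping any stale mapping and appending the current node IP. A rough Go sketch of the same "drop the old entry, append the new one" pattern, operating on an in-memory copy with values from this log; illustrative, not minikube's implementation:

    // Mirror the grep/echo pipeline above: remove any existing mapping for the
    // name, then append the new IP mapping. Works on a string for clarity.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func pinHost(hosts, name, ip string) string {
    	var b strings.Builder
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // discard the stale entry
    		}
    		b.WriteString(line + "\n")
    	}
    	b.WriteString(ip + "\t" + name + "\n")
    	return b.String()
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n"
    	fmt.Print(pinHost(hosts, "control-plane.minikube.internal", "192.168.85.2"))
    }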
	I1109 14:39:20.218611  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:20.366358  196129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:20.384455  196129 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048 for IP: 192.168.85.2
	I1109 14:39:20.384475  196129 certs.go:195] generating shared ca certs ...
	I1109 14:39:20.384493  196129 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:20.384623  196129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:39:20.384665  196129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:39:20.384672  196129 certs.go:257] generating profile certs ...
	I1109 14:39:20.384786  196129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key
	I1109 14:39:20.384849  196129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c
	I1109 14:39:20.384887  196129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key
	I1109 14:39:20.384987  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:39:20.385015  196129 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:39:20.385023  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:39:20.385046  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:39:20.385067  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:39:20.385087  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:39:20.385128  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:20.385719  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:39:20.406961  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:39:20.439170  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:39:20.464461  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:39:20.498671  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:39:20.538022  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:39:20.576148  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:39:20.647061  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:39:20.713722  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:39:20.735137  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:39:20.759543  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:39:20.778573  196129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:39:20.791915  196129 ssh_runner.go:195] Run: openssl version
	I1109 14:39:20.804883  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:39:20.821236  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.826965  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.827033  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.880407  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:39:20.888410  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:39:20.897832  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.901509  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.901575  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.942961  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:39:20.950695  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:39:20.958594  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:20.963390  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:20.963454  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:21.024236  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
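The 51391683.0, 3ec20f2e.0 and b5213941.0 names above follow the OpenSSL subject-hash convention: "openssl x509 -hash -noout" prints a short hash of the certificate subject, and a <hash>.0 symlink under /etc/ssl/certs lets OpenSSL-based clients find the CA by that hash. A hedged Go sketch of this step; it shells out to openssl (present on the node per the log) and reuses the paths shown above:

    // Compute the OpenSSL subject hash of a CA certificate and create the
    // /etc/ssl/certs/<hash>.0 symlink, as the log does with openssl + ln.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"

    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above

    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace a stale link if one exists
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("linked %s to %s", link, cert)
    }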
	I1109 14:39:21.038127  196129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:39:21.045164  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:39:21.092111  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:39:21.157987  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:39:21.210593  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:39:21.275270  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:39:21.342680  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
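Each "-checkend 86400" call above makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours). An equivalent check written with Go's crypto/x509, as a sketch only; the path is one of the certificates probed above:

    // Rough equivalent of `openssl x509 -checkend 86400`: parse the certificate
    // and report whether it expires within the next 24 hours.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if time.Until(cert.NotAfter) < 24*time.Hour {
    		fmt.Println("certificate will expire within 24h")
    	} else {
    		fmt.Println("certificate is valid for at least another 24h")
    	}
    }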
	I1109 14:39:21.420934  196129 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:21.421028  196129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:39:21.421090  196129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:39:21.519887  196129 cri.go:89] found id: "7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d"
	I1109 14:39:21.519932  196129 cri.go:89] found id: "6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb"
	I1109 14:39:21.519938  196129 cri.go:89] found id: "7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f"
	I1109 14:39:21.519945  196129 cri.go:89] found id: ""
	I1109 14:39:21.519999  196129 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:39:21.543667  196129 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:21Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:39:21.543751  196129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:39:21.572102  196129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:39:21.572126  196129 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:39:21.572191  196129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:39:21.608694  196129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:39:21.609164  196129 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103048" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:21.609280  196129 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103048" cluster setting kubeconfig missing "default-k8s-diff-port-103048" context setting]
	I1109 14:39:21.609631  196129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.611238  196129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:39:21.624438  196129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1109 14:39:21.624472  196129 kubeadm.go:602] duration metric: took 52.339359ms to restartPrimaryControlPlane
	I1109 14:39:21.624481  196129 kubeadm.go:403] duration metric: took 203.557147ms to StartCluster
	I1109 14:39:21.624504  196129 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.624565  196129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:21.625263  196129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.625488  196129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:39:21.625839  196129 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:21.625884  196129 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:39:21.626037  196129 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.626062  196129 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.626071  196129 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:39:21.626090  196129 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.626131  196129 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.626162  196129 addons.go:248] addon dashboard should already be in state true
	I1109 14:39:21.626201  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.626098  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.626753  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.626812  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.626105  196129 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.627319  196129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103048"
	I1109 14:39:21.627583  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.630802  196129 out.go:179] * Verifying Kubernetes components...
	I1109 14:39:21.639626  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:21.684051  196129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:39:21.684138  196129 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:39:21.689975  196129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:21.690000  196129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:39:21.690064  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.691595  196129 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.691618  196129 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:39:21.691648  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.692125  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.693212  196129 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:39:20.654722  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:39:20.654747  196795 machine.go:97] duration metric: took 4.451852424s to provisionDockerMachine
	I1109 14:39:20.654773  196795 start.go:293] postStartSetup for "embed-certs-422728" (driver="docker")
	I1109 14:39:20.654784  196795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:39:20.654845  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:39:20.654912  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.679374  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.801375  196795 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:39:20.805427  196795 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:39:20.805453  196795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:39:20.805462  196795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:39:20.805518  196795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:39:20.805610  196795 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:39:20.805711  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:39:20.816935  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:20.836739  196795 start.go:296] duration metric: took 181.951304ms for postStartSetup
	I1109 14:39:20.836817  196795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:39:20.836854  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.857314  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.961850  196795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:39:20.969380  196795 fix.go:56] duration metric: took 5.089888739s for fixHost
	I1109 14:39:20.969406  196795 start.go:83] releasing machines lock for "embed-certs-422728", held for 5.089951877s
	I1109 14:39:20.969490  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:20.989316  196795 ssh_runner.go:195] Run: cat /version.json
	I1109 14:39:20.989379  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.989634  196795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:39:20.989678  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:21.019194  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:21.033559  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:21.156397  196795 ssh_runner.go:195] Run: systemctl --version
	I1109 14:39:21.283906  196795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:39:21.351300  196795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:39:21.357015  196795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:39:21.357091  196795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:39:21.368625  196795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:39:21.368703  196795 start.go:496] detecting cgroup driver to use...
	I1109 14:39:21.368745  196795 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:39:21.368818  196795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:39:21.387612  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:39:21.408379  196795 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:39:21.408518  196795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:39:21.436708  196795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:39:21.466974  196795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:39:21.728628  196795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:39:21.974405  196795 docker.go:234] disabling docker service ...
	I1109 14:39:21.974481  196795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:39:22.005296  196795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:39:22.034069  196795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:39:22.248316  196795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:39:22.448530  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:39:22.471795  196795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:39:22.504195  196795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:39:22.504253  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.522453  196795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:39:22.522527  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.540125  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.553926  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.576162  196795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:39:22.585909  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.594587  196795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.609067  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.617377  196795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:39:22.630975  196795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:39:22.638323  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:22.838273  196795 ssh_runner.go:195] Run: sudo systemctl restart crio
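The sed commands before this restart rewrite /etc/crio/crio.conf.d/02-crio.conf in place: they pin pause_image to registry.k8s.io/pause:3.10.1, force cgroup_manager to cgroupfs with conmon_cgroup set to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A small Go sketch of the first of those edits, assuming the same drop-in path and line format as the sed pattern above; illustrative, not minikube's code:

    // Rewrite the pause_image line in the cri-o drop-in, mirroring
    // sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'.
    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	updated := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	if err := os.WriteFile(path, updated, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }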
	I1109 14:39:23.036210  196795 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:39:23.036366  196795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:39:23.044751  196795 start.go:564] Will wait 60s for crictl version
	I1109 14:39:23.044867  196795 ssh_runner.go:195] Run: which crictl
	I1109 14:39:23.051712  196795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:39:23.102897  196795 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:39:23.103045  196795 ssh_runner.go:195] Run: crio --version
	I1109 14:39:23.156948  196795 ssh_runner.go:195] Run: crio --version
	I1109 14:39:23.225201  196795 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:39:21.696124  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:39:21.696149  196129 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:39:21.696218  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.741371  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:21.750194  196129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:21.750218  196129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:39:21.750296  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.766459  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:21.787620  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:22.148935  196129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:22.161023  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:22.228626  196129 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103048" to be "Ready" ...
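node_ready.go then polls the Node object until its Ready condition is True, for up to the 6m0s stated above. A hedged client-go sketch of that kind of wait; the kubeconfig path and node name are copied from this log and would only resolve on the node itself, so treat it as illustration rather than minikube's own loop:

    // Poll a Node until its Ready condition is True or a deadline passes.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-103048", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(5 * time.Second)
    	}
    	log.Fatal("timed out waiting for the node to become Ready")
    }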
	I1109 14:39:22.237309  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:22.258565  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:39:22.258641  196129 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:39:22.389427  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:39:22.389532  196129 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:39:22.526134  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:39:22.526206  196129 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:39:22.627561  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:39:22.627621  196129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:39:22.674772  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:39:22.674843  196129 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:39:22.695155  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:39:22.695229  196129 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:39:22.738582  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:39:22.738656  196129 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:39:22.763078  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:39:22.763151  196129 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:39:22.805266  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:22.805341  196129 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:39:22.831261  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:23.228075  196795 cli_runner.go:164] Run: docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:39:23.257879  196795 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:39:23.262130  196795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:23.280983  196795 kubeadm.go:884] updating cluster {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:39:23.281094  196795 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:23.281162  196795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:23.361099  196795 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:23.361119  196795 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:39:23.361171  196795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:23.413183  196795 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:23.413202  196795 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:39:23.413210  196795 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:39:23.413308  196795 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:39:23.413385  196795 ssh_runner.go:195] Run: crio config
	I1109 14:39:23.563585  196795 cni.go:84] Creating CNI manager for ""
	I1109 14:39:23.563654  196795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:23.563691  196795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:39:23.563764  196795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422728 NodeName:embed-certs-422728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:39:23.563947  196795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:39:23.564045  196795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:39:23.572916  196795 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:39:23.573035  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:39:23.581385  196795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1109 14:39:23.595976  196795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:39:23.609988  196795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1109 14:39:23.624103  196795 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:39:23.627903  196795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:23.637960  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:23.834596  196795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:23.851619  196795 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728 for IP: 192.168.76.2
	I1109 14:39:23.851693  196795 certs.go:195] generating shared ca certs ...
	I1109 14:39:23.851722  196795 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:23.851903  196795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:39:23.851988  196795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:39:23.852012  196795 certs.go:257] generating profile certs ...
	I1109 14:39:23.852144  196795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key
	I1109 14:39:23.852244  196795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a
	I1109 14:39:23.852384  196795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key
	I1109 14:39:23.852540  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:39:23.852606  196795 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:39:23.852637  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:39:23.852689  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:39:23.852735  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:39:23.852795  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:39:23.852868  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:23.853641  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:39:23.941040  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:39:24.012418  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:39:24.042429  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:39:24.071468  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1109 14:39:24.116434  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:39:24.161053  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:39:24.224105  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:39:24.267707  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:39:24.314203  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:39:24.345761  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:39:24.382658  196795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:39:24.401317  196795 ssh_runner.go:195] Run: openssl version
	I1109 14:39:24.412746  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:39:24.425193  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.429586  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.429714  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.492081  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:39:24.502155  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:39:24.510808  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.515143  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.515237  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.570674  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:39:24.579490  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:39:24.606288  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.614978  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.615077  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.702675  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:39:24.724731  196795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:39:24.736968  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:39:24.828754  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:39:24.919293  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:39:25.033233  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:39:25.133106  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:39:25.239384  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:39:25.320678  196795 kubeadm.go:401] StartCluster: {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:25.320782  196795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:39:25.320876  196795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:39:25.395488  196795 cri.go:89] found id: "a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366"
	I1109 14:39:25.395518  196795 cri.go:89] found id: "2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc"
	I1109 14:39:25.395523  196795 cri.go:89] found id: "7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df"
	I1109 14:39:25.395529  196795 cri.go:89] found id: "7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16"
	I1109 14:39:25.395540  196795 cri.go:89] found id: ""
	I1109 14:39:25.395626  196795 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:39:25.421453  196795 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:25Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:39:25.421568  196795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:39:25.434118  196795 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:39:25.434139  196795 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:39:25.434224  196795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:39:25.455848  196795 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:39:25.456462  196795 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-422728" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:25.456756  196795 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-422728" cluster setting kubeconfig missing "embed-certs-422728" context setting]
	I1109 14:39:25.457252  196795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.458892  196795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:39:25.472254  196795 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:39:25.472299  196795 kubeadm.go:602] duration metric: took 38.151656ms to restartPrimaryControlPlane
	I1109 14:39:25.472333  196795 kubeadm.go:403] duration metric: took 151.665347ms to StartCluster
	I1109 14:39:25.472350  196795 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.472439  196795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:25.474717  196795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.475122  196795 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:39:25.475457  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:25.475514  196795 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:39:25.475607  196795 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422728"
	I1109 14:39:25.475629  196795 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422728"
	W1109 14:39:25.475642  196795 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:39:25.475657  196795 addons.go:70] Setting dashboard=true in profile "embed-certs-422728"
	I1109 14:39:25.475671  196795 addons.go:239] Setting addon dashboard=true in "embed-certs-422728"
	W1109 14:39:25.475677  196795 addons.go:248] addon dashboard should already be in state true
	I1109 14:39:25.475700  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.476345  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.476519  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.476941  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.477501  196795 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422728"
	I1109 14:39:25.477528  196795 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422728"
	I1109 14:39:25.477804  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.483396  196795 out.go:179] * Verifying Kubernetes components...
	I1109 14:39:25.487964  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:25.515113  196795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:39:25.518086  196795 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:39:25.521009  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:39:25.521039  196795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:39:25.521115  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.540397  196795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:39:25.545565  196795 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:25.545587  196795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:39:25.545649  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.553421  196795 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422728"
	W1109 14:39:25.553458  196795 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:39:25.553498  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.553946  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.587976  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.610580  196795 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:25.610609  196795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:39:25.610676  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.611768  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.643462  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.951056  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:26.036278  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:39:26.036356  196795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:39:26.113974  196795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:26.133211  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:26.150339  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:39:26.150412  196795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:39:26.224674  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:39:26.224743  196795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:39:26.342164  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:39:26.342238  196795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:39:26.457225  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:39:26.457281  196795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:39:26.524480  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:39:26.524551  196795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:39:26.545432  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:39:26.545495  196795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:39:26.569785  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:39:26.569856  196795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:39:26.593384  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:26.593446  196795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:39:26.632772  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:29.705357  196129 node_ready.go:49] node "default-k8s-diff-port-103048" is "Ready"
	I1109 14:39:29.705456  196129 node_ready.go:38] duration metric: took 7.476741625s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:39:29.705484  196129 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:39:29.705569  196129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:39:32.996787  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.835671987s)
	I1109 14:39:32.996899  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.759518546s)
	I1109 14:39:32.997220  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.165879191s)
	I1109 14:39:32.997471  196129 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.291860623s)
	I1109 14:39:32.997521  196129 api_server.go:72] duration metric: took 11.371993953s to wait for apiserver process to appear ...
	I1109 14:39:32.997542  196129 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:39:32.997571  196129 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:39:33.000725  196129 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-103048 addons enable metrics-server
	
	I1109 14:39:33.020969  196129 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:39:33.023683  196129 api_server.go:141] control plane version: v1.34.1
	I1109 14:39:33.023714  196129 api_server.go:131] duration metric: took 26.153345ms to wait for apiserver health ...
	I1109 14:39:33.023725  196129 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:39:33.032087  196129 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:39:33.033482  196129 system_pods.go:59] 8 kube-system pods found
	I1109 14:39:33.033582  196129 system_pods.go:61] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:33.033606  196129 system_pods.go:61] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:33.033629  196129 system_pods.go:61] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:33.033667  196129 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:39:33.033692  196129 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:33.033712  196129 system_pods.go:61] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:39:33.033743  196129 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:39:33.033770  196129 system_pods.go:61] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:39:33.033790  196129 system_pods.go:74] duration metric: took 10.030263ms to wait for pod list to return data ...
	I1109 14:39:33.033824  196129 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:39:33.034992  196129 addons.go:515] duration metric: took 11.409095214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:39:33.040658  196129 default_sa.go:45] found service account: "default"
	I1109 14:39:33.040686  196129 default_sa.go:55] duration metric: took 6.835118ms for default service account to be created ...
	I1109 14:39:33.040697  196129 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:39:33.044695  196129 system_pods.go:86] 8 kube-system pods found
	I1109 14:39:33.044733  196129 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:33.044743  196129 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:33.044786  196129 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:33.044801  196129 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:39:33.044809  196129 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:33.044819  196129 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:39:33.044824  196129 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:39:33.044829  196129 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:39:33.044854  196129 system_pods.go:126] duration metric: took 4.149902ms to wait for k8s-apps to be running ...
	I1109 14:39:33.044870  196129 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:39:33.044951  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:39:33.077530  196129 system_svc.go:56] duration metric: took 32.649827ms WaitForService to wait for kubelet
	I1109 14:39:33.077564  196129 kubeadm.go:587] duration metric: took 11.452030043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:33.077606  196129 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:39:33.086426  196129 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:39:33.086461  196129 node_conditions.go:123] node cpu capacity is 2
	I1109 14:39:33.086473  196129 node_conditions.go:105] duration metric: took 8.861178ms to run NodePressure ...
	I1109 14:39:33.086516  196129 start.go:242] waiting for startup goroutines ...
	I1109 14:39:33.086533  196129 start.go:247] waiting for cluster config update ...
	I1109 14:39:33.086544  196129 start.go:256] writing updated cluster config ...
	I1109 14:39:33.086866  196129 ssh_runner.go:195] Run: rm -f paused
	I1109 14:39:33.096386  196129 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:39:33.164789  196129 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:39:35.201675  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.250533062s)
	I1109 14:39:35.201721  196795 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.087664371s)
	I1109 14:39:35.201760  196795 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422728" to be "Ready" ...
	I1109 14:39:35.202074  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.068793828s)
	I1109 14:39:35.202315  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.569467426s)
	I1109 14:39:35.205755  196795 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-422728 addons enable metrics-server
	
	I1109 14:39:35.282264  196795 node_ready.go:49] node "embed-certs-422728" is "Ready"
	I1109 14:39:35.282343  196795 node_ready.go:38] duration metric: took 80.561028ms for node "embed-certs-422728" to be "Ready" ...
	I1109 14:39:35.282371  196795 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:39:35.282455  196795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:39:35.306663  196795 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:39:35.309737  196795 addons.go:515] duration metric: took 9.834202528s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:39:35.336441  196795 api_server.go:72] duration metric: took 9.861275529s to wait for apiserver process to appear ...
	I1109 14:39:35.336467  196795 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:39:35.336489  196795 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:39:35.381991  196795 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:39:35.384051  196795 api_server.go:141] control plane version: v1.34.1
	I1109 14:39:35.384080  196795 api_server.go:131] duration metric: took 47.606213ms to wait for apiserver health ...
	I1109 14:39:35.384090  196795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:39:35.401482  196795 system_pods.go:59] 8 kube-system pods found
	I1109 14:39:35.401522  196795 system_pods.go:61] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:35.401532  196795 system_pods.go:61] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:35.401542  196795 system_pods.go:61] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:35.401547  196795 system_pods.go:61] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:39:35.401556  196795 system_pods.go:61] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:35.401564  196795 system_pods.go:61] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:39:35.401581  196795 system_pods.go:61] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:39:35.401590  196795 system_pods.go:61] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:39:35.401601  196795 system_pods.go:74] duration metric: took 17.504641ms to wait for pod list to return data ...
	I1109 14:39:35.401610  196795 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:39:35.428228  196795 default_sa.go:45] found service account: "default"
	I1109 14:39:35.428256  196795 default_sa.go:55] duration metric: took 26.634138ms for default service account to be created ...
	I1109 14:39:35.428275  196795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:39:35.432793  196795 system_pods.go:86] 8 kube-system pods found
	I1109 14:39:35.432824  196795 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:35.432834  196795 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:35.432841  196795 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:35.432854  196795 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:39:35.432865  196795 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:35.432877  196795 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:39:35.432884  196795 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:39:35.432901  196795 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:39:35.432909  196795 system_pods.go:126] duration metric: took 4.628396ms to wait for k8s-apps to be running ...
	I1109 14:39:35.432921  196795 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:39:35.432993  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:39:35.485432  196795 system_svc.go:56] duration metric: took 52.500556ms WaitForService to wait for kubelet
	I1109 14:39:35.485461  196795 kubeadm.go:587] duration metric: took 10.010301465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:35.485480  196795 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:39:35.509089  196795 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:39:35.509123  196795 node_conditions.go:123] node cpu capacity is 2
	I1109 14:39:35.509136  196795 node_conditions.go:105] duration metric: took 23.649629ms to run NodePressure ...
	I1109 14:39:35.509148  196795 start.go:242] waiting for startup goroutines ...
	I1109 14:39:35.509156  196795 start.go:247] waiting for cluster config update ...
	I1109 14:39:35.509166  196795 start.go:256] writing updated cluster config ...
	I1109 14:39:35.509440  196795 ssh_runner.go:195] Run: rm -f paused
	I1109 14:39:35.523671  196795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:39:35.544324  196795 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:39:35.214818  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:37.670741  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:37.550361  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:39.551201  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:39.671795  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:41.672702  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:42.050591  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:44.052665  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:43.679828  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:46.172576  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:46.549936  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:48.550731  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:50.550852  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:48.675461  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:51.171417  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:53.050155  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:55.050846  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:53.669698  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:55.670713  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:58.170504  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:57.550560  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:00.080694  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:00.191460  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:40:02.670935  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:40:02.550181  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:04.550484  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	I1109 14:40:05.170570  196129 pod_ready.go:94] pod "coredns-66bc5c9577-rbvc2" is "Ready"
	I1109 14:40:05.170595  196129 pod_ready.go:86] duration metric: took 32.005779394s for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.173494  196129 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.178322  196129 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.178350  196129 pod_ready.go:86] duration metric: took 4.826832ms for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.181165  196129 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.185964  196129 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.185994  196129 pod_ready.go:86] duration metric: took 4.801946ms for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.188492  196129 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.369137  196129 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.369168  196129 pod_ready.go:86] duration metric: took 180.647632ms for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.567982  196129 pod_ready.go:83] waiting for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.968952  196129 pod_ready.go:94] pod "kube-proxy-c57m2" is "Ready"
	I1109 14:40:05.968978  196129 pod_ready.go:86] duration metric: took 400.969079ms for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.169164  196129 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.568343  196129 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:06.568432  196129 pod_ready.go:86] duration metric: took 399.237416ms for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.568451  196129 pod_ready.go:40] duration metric: took 33.4720313s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:40:06.631797  196129 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:40:06.635018  196129 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103048" cluster and "default" namespace by default
	W1109 14:40:06.551498  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	I1109 14:40:07.550990  196795 pod_ready.go:94] pod "coredns-66bc5c9577-4hk6l" is "Ready"
	I1109 14:40:07.551029  196795 pod_ready.go:86] duration metric: took 32.006673308s for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.553713  196795 pod_ready.go:83] waiting for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.558418  196795 pod_ready.go:94] pod "etcd-embed-certs-422728" is "Ready"
	I1109 14:40:07.558442  196795 pod_ready.go:86] duration metric: took 4.698642ms for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.560963  196795 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.565961  196795 pod_ready.go:94] pod "kube-apiserver-embed-certs-422728" is "Ready"
	I1109 14:40:07.565990  196795 pod_ready.go:86] duration metric: took 4.998009ms for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.568596  196795 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.747686  196795 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422728" is "Ready"
	I1109 14:40:07.747712  196795 pod_ready.go:86] duration metric: took 179.092274ms for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.948777  196795 pod_ready.go:83] waiting for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.348208  196795 pod_ready.go:94] pod "kube-proxy-5zn8j" is "Ready"
	I1109 14:40:08.348242  196795 pod_ready.go:86] duration metric: took 399.417231ms for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.548588  196795 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.948477  196795 pod_ready.go:94] pod "kube-scheduler-embed-certs-422728" is "Ready"
	I1109 14:40:08.948506  196795 pod_ready.go:86] duration metric: took 399.893445ms for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.948519  196795 pod_ready.go:40] duration metric: took 33.424813505s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:40:09.011705  196795 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:40:09.015201  196795 out.go:179] * Done! kubectl is now configured to use "embed-certs-422728" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:40:01 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:01.248533172Z" level=info msg="Removed container 38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l/dashboard-metrics-scraper" id=7a76787b-fa16-49be-839a-80d0c7cd0a36 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:40:02 default-k8s-diff-port-103048 conmon[1147]: conmon cf78d41778b8d4241abc <ninfo>: container 1154 exited with status 1
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.240417122Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2ea6294a-7b46-44f0-af7f-fea2507ce523 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.241795554Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7b078ecf-ad56-46bf-8c85-cf58a9f8c8f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.243293889Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ad5bba3b-6022-45dd-b26c-be6b4a30237f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.243406744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.248740944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.248913368Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/874d8eff2aa994f4a0df23deb7ae1d272461800e7ebe976c7b57a1997df8602f/merged/etc/passwd: no such file or directory"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.248934012Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/874d8eff2aa994f4a0df23deb7ae1d272461800e7ebe976c7b57a1997df8602f/merged/etc/group: no such file or directory"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.249182531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.270562984Z" level=info msg="Created container 887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c: kube-system/storage-provisioner/storage-provisioner" id=ad5bba3b-6022-45dd-b26c-be6b4a30237f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.272954398Z" level=info msg="Starting container: 887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c" id=3360318b-e6b9-4fa6-adc3-f98cfe7a1c6c name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.275412733Z" level=info msg="Started container" PID=1639 containerID=887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c description=kube-system/storage-provisioner/storage-provisioner id=3360318b-e6b9-4fa6-adc3-f98cfe7a1c6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=99b118862bb4b7267069a5577fd9df4036f6d5423bab063eda2461ef74dc704e
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.306247622Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.316769758Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.317372265Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.317675431Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.322318352Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.32235251Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.322375649Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.325498482Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.325529186Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.325549363Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.328808953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.328842471Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	887e2026e7789       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           18 seconds ago       Running             storage-provisioner         2                   99b118862bb4b       storage-provisioner                                    kube-system
	ce0dc366d5e71       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   eb13898fae8d3       dashboard-metrics-scraper-6ffb444bf9-8h69l             kubernetes-dashboard
	b77c688e34b49       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   18f2ffd9a5f9f       kubernetes-dashboard-855c9754f9-swwl8                  kubernetes-dashboard
	dc599fcbb3350       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           49 seconds ago       Running             coredns                     1                   15521f9f70953       coredns-66bc5c9577-rbvc2                               kube-system
	e9c4c43949e6a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           49 seconds ago       Running             busybox                     1                   6acf70a827692       busybox                                                default
	be38569ab0491       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago       Running             kindnet-cni                 1                   79035adc44601       kindnet-tz2x5                                          kube-system
	cf78d41778b8d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago       Exited              storage-provisioner         1                   99b118862bb4b       storage-provisioner                                    kube-system
	da4b547d02513       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago       Running             kube-proxy                  1                   f96eaf86c4dce       kube-proxy-c57m2                                       kube-system
	7e93099edfb89       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   89ff9146f6244       kube-controller-manager-default-k8s-diff-port-103048   kube-system
	0c584231ed8c3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2e62380a0a11b       kube-scheduler-default-k8s-diff-port-103048            kube-system
	6ac4e39c7a9ff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1b0c0f4494c5d       kube-apiserver-default-k8s-diff-port-103048            kube-system
	7d4eac93ccb3e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d47dac0b67da7       etcd-default-k8s-diff-port-103048                      kube-system
	
	
	==> coredns [dc599fcbb33507001316216bcb43133c63b59a24b97538fdfa9814b27f4e7cee] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38928 - 33408 "HINFO IN 3306701821934207797.2508219549672476477. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023355546s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-103048
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-103048
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=default-k8s-diff-port-103048
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_38_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-103048
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:40:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:39:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:39:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:39:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:39:51 +0000   Sun, 09 Nov 2025 14:38:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-103048
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6ac075f8-cd4f-431f-b369-b54146be0749
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-rbvc2                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m15s
	  kube-system                 etcd-default-k8s-diff-port-103048                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m21s
	  kube-system                 kindnet-tz2x5                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m16s
	  kube-system                 kube-apiserver-default-k8s-diff-port-103048             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-103048    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-c57m2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-scheduler-default-k8s-diff-port-103048             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8h69l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-swwl8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m14s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m21s                  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m21s                  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m21s                  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m17s                  node-controller  Node default-k8s-diff-port-103048 event: Registered Node default-k8s-diff-port-103048 in Controller
	  Normal   NodeReady                95s                    kubelet          Node default-k8s-diff-port-103048 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           47s                    node-controller  Node default-k8s-diff-port-103048 event: Registered Node default-k8s-diff-port-103048 in Controller
	
	
	==> dmesg <==
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f] <==
	{"level":"warn","ts":"2025-11-09T14:39:25.728113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:25.791374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:25.843778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:25.904189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:25.959938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.001375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.083666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.118085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.183237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.306108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.416838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.462072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.561155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.602800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.651571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.764667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.770247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.788698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.839933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.871821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.926932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:27.035945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:27.100342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:27.136385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:27.270443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39646","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:21 up  1:22,  0 user,  load average: 3.19, 3.42, 2.82
	Linux default-k8s-diff-port-103048 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be38569ab0491d9f49c9dcbf8de0ce6af947e38961945fd5b81c78da6c67aadb] <==
	I1109 14:39:32.066805       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:39:32.067051       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:39:32.067204       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:39:32.067217       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:39:32.067239       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:39:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:39:32.299350       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:39:32.299382       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:39:32.299391       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:39:32.299657       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:40:02.299760       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:40:02.299762       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:40:02.299920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:40:02.300077       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1109 14:40:03.800106       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:40:03.800139       1 metrics.go:72] Registering metrics
	I1109 14:40:03.800215       1 controller.go:711] "Syncing nftables rules"
	I1109 14:40:12.304628       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:40:12.304742       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb] <==
	I1109 14:39:29.931419       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:39:29.976946       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:39:29.988485       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:39:29.981772       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1109 14:39:29.988676       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 14:39:29.988768       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:39:29.977213       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:39:29.981707       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:39:29.981719       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:39:29.981731       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:39:30.020769       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:39:30.021014       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:39:30.021311       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:39:29.988744       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:39:30.096965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:39:30.727165       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:39:32.426581       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:39:32.531581       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:39:32.589472       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:39:32.613961       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:39:32.797164       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.245.74"}
	I1109 14:39:32.861967       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.242.141"}
	I1109 14:39:34.666059       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:39:34.800364       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:39:35.097808       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d] <==
	I1109 14:39:34.576052       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:39:34.576136       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:39:34.576334       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:39:34.579989       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:39:34.580098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:39:34.580194       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-103048"
	I1109 14:39:34.580239       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 14:39:34.581750       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:39:34.588760       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:39:34.589092       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:39:34.589358       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:39:34.589481       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:39:34.589698       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:39:34.590728       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:39:34.590802       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:39:34.614759       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:39:34.615313       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:39:34.633088       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:39:34.633165       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:39:34.659875       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:39:34.659926       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:39:34.660171       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:39:34.692691       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:39:34.692725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:39:34.692735       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [da4b547d025130d284095282ff9a975da37d1fb29f3f9f7f5b591578b7601596] <==
	I1109 14:39:32.450871       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:39:32.989020       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:39:33.107946       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:39:33.107989       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:39:33.108057       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:39:33.327113       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:39:33.327184       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:39:33.336890       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:39:33.337229       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:39:33.337257       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:39:33.350797       1 config.go:200] "Starting service config controller"
	I1109 14:39:33.350896       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:39:33.351086       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:39:33.351128       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:39:33.351299       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:39:33.351343       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:39:33.353341       1 config.go:309] "Starting node config controller"
	I1109 14:39:33.353365       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:39:33.353372       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:39:33.459422       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:39:33.459465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:39:33.459516       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c584231ed8c3e74b3273a950c29860375fa8aeb7da46e7a2e139930d0830dd1] <==
	I1109 14:39:26.293731       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:39:29.856446       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:39:29.856486       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:39:29.856497       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:39:29.856504       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:39:30.072563       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:39:30.072670       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:39:30.136603       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:39:30.139637       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:30.141078       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:30.139673       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:39:30.348512       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:39:35 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:35.353948     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8829d1d9-dfd6-4815-b411-a34dcf9a605f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8h69l\" (UID: \"8829d1d9-dfd6-4815-b411-a34dcf9a605f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l"
	Nov 09 14:39:35 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:35.354063     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q8r\" (UniqueName: \"kubernetes.io/projected/41f9a5ae-7b92-448d-8014-b25c5eea04c2-kube-api-access-64q8r\") pod \"kubernetes-dashboard-855c9754f9-swwl8\" (UID: \"41f9a5ae-7b92-448d-8014-b25c5eea04c2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-swwl8"
	Nov 09 14:39:35 default-k8s-diff-port-103048 kubelet[775]: W1109 14:39:35.645807     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/crio-eb13898fae8d301a77f6c698f17b6f20a229eb2b8340baaa3b95045cd403ba5d WatchSource:0}: Error finding container eb13898fae8d301a77f6c698f17b6f20a229eb2b8340baaa3b95045cd403ba5d: Status 404 returned error can't find the container with id eb13898fae8d301a77f6c698f17b6f20a229eb2b8340baaa3b95045cd403ba5d
	Nov 09 14:39:35 default-k8s-diff-port-103048 kubelet[775]: W1109 14:39:35.682906     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/crio-18f2ffd9a5f9fe2f579e19dea6b42de4fe342bfa1dbc2cff78c0fd9243becb81 WatchSource:0}: Error finding container 18f2ffd9a5f9fe2f579e19dea6b42de4fe342bfa1dbc2cff78c0fd9243becb81: Status 404 returned error can't find the container with id 18f2ffd9a5f9fe2f579e19dea6b42de4fe342bfa1dbc2cff78c0fd9243becb81
	Nov 09 14:39:42 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:42.150955     775 scope.go:117] "RemoveContainer" containerID="e7588a2f8c6c695a1bfcf66823627d7be7c69301bd1df725f15c554a1b7660d8"
	Nov 09 14:39:43 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:43.158318     775 scope.go:117] "RemoveContainer" containerID="e7588a2f8c6c695a1bfcf66823627d7be7c69301bd1df725f15c554a1b7660d8"
	Nov 09 14:39:43 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:43.158654     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:39:43 default-k8s-diff-port-103048 kubelet[775]: E1109 14:39:43.158851     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:39:44 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:44.170843     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:39:44 default-k8s-diff-port-103048 kubelet[775]: E1109 14:39:44.171022     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:39:45 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:45.583039     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:39:45 default-k8s-diff-port-103048 kubelet[775]: E1109 14:39:45.583216     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:40:00 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:00.709356     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:40:01 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:01.231725     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:40:01 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:01.232181     775 scope.go:117] "RemoveContainer" containerID="ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	Nov 09 14:40:01 default-k8s-diff-port-103048 kubelet[775]: E1109 14:40:01.233057     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:40:01 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:01.256504     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-swwl8" podStartSLOduration=13.303506619 podStartE2EDuration="26.255155556s" podCreationTimestamp="2025-11-09 14:39:35 +0000 UTC" firstStartedPulling="2025-11-09 14:39:35.686508273 +0000 UTC m=+15.302954545" lastFinishedPulling="2025-11-09 14:39:48.63815721 +0000 UTC m=+28.254603482" observedRunningTime="2025-11-09 14:39:49.222598563 +0000 UTC m=+28.839044843" watchObservedRunningTime="2025-11-09 14:40:01.255155556 +0000 UTC m=+40.871601860"
	Nov 09 14:40:03 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:03.240035     775 scope.go:117] "RemoveContainer" containerID="cf78d41778b8d4241abc1e4adceffd57e40c195173e228a0d2dca2bd521cce85"
	Nov 09 14:40:05 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:05.583030     775 scope.go:117] "RemoveContainer" containerID="ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	Nov 09 14:40:05 default-k8s-diff-port-103048 kubelet[775]: E1109 14:40:05.583247     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:40:18 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:18.712479     775 scope.go:117] "RemoveContainer" containerID="ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	Nov 09 14:40:18 default-k8s-diff-port-103048 kubelet[775]: E1109 14:40:18.712659     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:40:19 default-k8s-diff-port-103048 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:40:19 default-k8s-diff-port-103048 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:40:19 default-k8s-diff-port-103048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b77c688e34b493a3b43c4d4222447f464615cadf3927d84572de47e9f20273fb] <==
	2025/11/09 14:39:48 Using namespace: kubernetes-dashboard
	2025/11/09 14:39:48 Using in-cluster config to connect to apiserver
	2025/11/09 14:39:48 Using secret token for csrf signing
	2025/11/09 14:39:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:39:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:39:48 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:39:48 Generating JWE encryption key
	2025/11/09 14:39:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:39:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:39:48 Initializing JWE encryption key from synchronized object
	2025/11/09 14:39:48 Creating in-cluster Sidecar client
	2025/11/09 14:39:48 Serving insecurely on HTTP port: 9090
	2025/11/09 14:39:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:40:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:39:48 Starting overwatch
	
	
	==> storage-provisioner [887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c] <==
	I1109 14:40:03.294512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:40:03.306057       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:40:03.306107       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:40:03.321898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:06.777332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:11.038560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:14.637597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:17.691197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:20.713481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:20.724147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:40:20.724319       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:40:20.724508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103048_4e41f923-9654-400a-8962-de961e818f43!
	I1109 14:40:20.725404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ec6f7261-5e6f-4cd5-8d6b-f26a96ba18b9", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-103048_4e41f923-9654-400a-8962-de961e818f43 became leader
	W1109 14:40:20.738902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:20.757569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:40:20.828078       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103048_4e41f923-9654-400a-8962-de961e818f43!
	
	
	==> storage-provisioner [cf78d41778b8d4241abc1e4adceffd57e40c195173e228a0d2dca2bd521cce85] <==
	I1109 14:39:32.255386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:40:02.258092       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048: exit status 2 (536.644715ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-103048 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-103048
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-103048:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3",
	        "Created": "2025-11-09T14:37:24.407836175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196258,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:39:13.793271243Z",
	            "FinishedAt": "2025-11-09T14:39:12.972944207Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/hosts",
	        "LogPath": "/var/lib/docker/containers/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3-json.log",
	        "Name": "/default-k8s-diff-port-103048",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-103048:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-103048",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3",
	                "LowerDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90c77410a0abe5011717be244f0d41706cf22dd1cd6f35d2e962ad2e9e3c6364/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-103048",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-103048/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-103048",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-103048",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-103048",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8bf862dff8d5fabd25c666df989d025302cb56761a371d2137c6bf76b96a6a5c",
	            "SandboxKey": "/var/run/docker/netns/8bf862dff8d5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-103048": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:74:a7:b0:18:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f575eafa491ba158377eb7b6fb901ba71cca9fc0a5cdf5e89e6c475d768dfea9",
	                    "EndpointID": "adf605d034ac4721d9b0ff3dcc5a30703f1f501d36bcc2c4ded3f979a07ddef8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-103048",
	                        "6ee0024be4f4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048: exit status 2 (467.496996ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-103048 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-103048 logs -n 25: (1.678821328s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	│ stop    │ -p old-k8s-version-349599 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ image   │ old-k8s-version-349599 image list --format=json                                                                                                                                                                                               │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ pause   │ -p old-k8s-version-349599 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ delete  │ -p cert-expiration-179822                                                                                                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ stop    │ -p embed-certs-422728 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ image   │ default-k8s-diff-port-103048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p default-k8s-diff-port-103048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:39:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:39:15.653812  196795 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:39:15.654001  196795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:39:15.654038  196795 out.go:374] Setting ErrFile to fd 2...
	I1109 14:39:15.654052  196795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:39:15.654356  196795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:39:15.654781  196795 out.go:368] Setting JSON to false
	I1109 14:39:15.655688  196795 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4906,"bootTime":1762694250,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:39:15.655757  196795 start.go:143] virtualization:  
	I1109 14:39:15.660654  196795 out.go:179] * [embed-certs-422728] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:39:15.663936  196795 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:39:15.663991  196795 notify.go:221] Checking for updates...
	I1109 14:39:15.670031  196795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:39:15.672921  196795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:15.675823  196795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:39:15.678877  196795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:39:15.681871  196795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:39:15.685303  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:15.685991  196795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:39:15.716089  196795 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:39:15.716233  196795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:39:15.783072  196795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-09 14:39:15.77300627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:39:15.783205  196795 docker.go:319] overlay module found
	I1109 14:39:15.786478  196795 out.go:179] * Using the docker driver based on existing profile
	I1109 14:39:15.789381  196795 start.go:309] selected driver: docker
	I1109 14:39:15.789420  196795 start.go:930] validating driver "docker" against &{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:15.789515  196795 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:39:15.790229  196795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:39:15.845783  196795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-09 14:39:15.836143549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:39:15.846132  196795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:15.846168  196795 cni.go:84] Creating CNI manager for ""
	I1109 14:39:15.846227  196795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:15.846266  196795 start.go:353] cluster config:
	{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:15.849466  196795 out.go:179] * Starting "embed-certs-422728" primary control-plane node in "embed-certs-422728" cluster
	I1109 14:39:15.852353  196795 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:39:15.855395  196795 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:39:15.858354  196795 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:15.858406  196795 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:39:15.858425  196795 cache.go:65] Caching tarball of preloaded images
	I1109 14:39:15.858430  196795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:39:15.858538  196795 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:39:15.858550  196795 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:39:15.858709  196795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:39:15.879215  196795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:39:15.879245  196795 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:39:15.879257  196795 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:39:15.879367  196795 start.go:360] acquireMachinesLock for embed-certs-422728: {Name:mkaf26c3066ebca49339c9527aed846108c5e799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:39:15.879441  196795 start.go:364] duration metric: took 46.114µs to acquireMachinesLock for "embed-certs-422728"
	I1109 14:39:15.879465  196795 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:39:15.879476  196795 fix.go:54] fixHost starting: 
	I1109 14:39:15.879824  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:15.897379  196795 fix.go:112] recreateIfNeeded on embed-certs-422728: state=Stopped err=<nil>
	W1109 14:39:15.897409  196795 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:39:13.761899  196129 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103048" ...
	I1109 14:39:13.762001  196129 cli_runner.go:164] Run: docker start default-k8s-diff-port-103048
	I1109 14:39:14.005697  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:14.031930  196129 kic.go:430] container "default-k8s-diff-port-103048" state is running.
	I1109 14:39:14.032334  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:14.054133  196129 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/config.json ...
	I1109 14:39:14.054518  196129 machine.go:94] provisionDockerMachine start ...
	I1109 14:39:14.054646  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:14.076480  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:14.076798  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:14.076807  196129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:39:14.077436  196129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58722->127.0.0.1:33065: read: connection reset by peer
	I1109 14:39:17.231473  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:39:17.231499  196129 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103048"
	I1109 14:39:17.231624  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.249722  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.250048  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.250064  196129 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103048 && echo "default-k8s-diff-port-103048" | sudo tee /etc/hostname
	I1109 14:39:17.410092  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:39:17.410211  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.428985  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.429306  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.429330  196129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:39:17.580249  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:39:17.580275  196129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:39:17.580301  196129 ubuntu.go:190] setting up certificates
	I1109 14:39:17.580311  196129 provision.go:84] configureAuth start
	I1109 14:39:17.580368  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:17.598399  196129 provision.go:143] copyHostCerts
	I1109 14:39:17.598470  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:39:17.598489  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:39:17.598565  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:39:17.598662  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:39:17.598674  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:39:17.598703  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:39:17.598755  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:39:17.598765  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:39:17.598788  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:39:17.598837  196129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103048 localhost minikube]
	I1109 14:39:17.688954  196129 provision.go:177] copyRemoteCerts
	I1109 14:39:17.689019  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:39:17.689060  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.708206  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:17.819695  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:39:17.837093  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 14:39:17.854586  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:39:17.871745  196129 provision.go:87] duration metric: took 291.419804ms to configureAuth
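	(Aside: the auth setup above copies ca.pem, server.pem and server-key.pem under /etc/docker on the node, with the SANs listed a few lines earlier — 127.0.0.1, 192.168.85.2, the profile name, localhost, minikube. If those SANs ever need to be checked by hand, a purely illustrative command on the node would be:
	# illustrative: inspect the server cert the scp lines above copied to the node
	sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'
	Nothing in the test run depends on this; it only shows where the provisioned cert can be verified.)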
	I1109 14:39:17.871814  196129 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:39:17.872050  196129 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:17.872194  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.889492  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.889805  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.889825  196129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:39:18.202831  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:39:18.202918  196129 machine.go:97] duration metric: took 4.148387076s to provisionDockerMachine
	I1109 14:39:18.202944  196129 start.go:293] postStartSetup for "default-k8s-diff-port-103048" (driver="docker")
	I1109 14:39:18.202988  196129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:39:18.203082  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:39:18.203170  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.224891  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.335626  196129 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:39:18.338990  196129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:39:18.339018  196129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:39:18.339029  196129 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:39:18.339123  196129 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:39:18.339197  196129 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:39:18.339307  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:39:18.347413  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:18.365395  196129 start.go:296] duration metric: took 162.403249ms for postStartSetup
	I1109 14:39:18.365474  196129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:39:18.365513  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.383461  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.485492  196129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:39:18.490710  196129 fix.go:56] duration metric: took 4.748854309s for fixHost
	I1109 14:39:18.490737  196129 start.go:83] releasing machines lock for "default-k8s-diff-port-103048", held for 4.748905699s
	I1109 14:39:18.490807  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:18.508468  196129 ssh_runner.go:195] Run: cat /version.json
	I1109 14:39:18.508516  196129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:39:18.508525  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.508574  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.533762  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.534380  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.733641  196129 ssh_runner.go:195] Run: systemctl --version
	I1109 14:39:18.740509  196129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:39:18.777813  196129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:39:18.782333  196129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:39:18.782411  196129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:39:18.790609  196129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:39:18.790636  196129 start.go:496] detecting cgroup driver to use...
	I1109 14:39:18.790700  196129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:39:18.790764  196129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:39:18.806443  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:39:18.820129  196129 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:39:18.820246  196129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:39:18.836297  196129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:39:18.849893  196129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:39:18.961965  196129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:39:19.074901  196129 docker.go:234] disabling docker service ...
	I1109 14:39:19.075010  196129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:39:19.090357  196129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:39:19.103755  196129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:39:19.214649  196129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:39:19.369065  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:39:19.382216  196129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:39:19.396769  196129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:39:19.396864  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.415946  196129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:39:19.416022  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.427276  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.437233  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.447125  196129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:39:19.455793  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.468606  196129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.482521  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.491385  196129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:39:19.499271  196129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:39:19.507157  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:19.643285  196129 ssh_runner.go:195] Run: sudo systemctl restart crio
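	(Aside: taken together, the sed commands above pin the pause image, set the cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports, all inside /etc/crio/crio.conf.d/02-crio.conf. An illustrative way to confirm the result on the node, using only paths and values taken from the commands in this log:
	# illustrative: confirm the keys the sed edits above should have left behind
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected values, per the commands in this log:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
	The daemon-reload and crio restart immediately above are what make these edits take effect.)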
	I1109 14:39:19.789716  196129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:39:19.789782  196129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:39:19.802113  196129 start.go:564] Will wait 60s for crictl version
	I1109 14:39:19.802187  196129 ssh_runner.go:195] Run: which crictl
	I1109 14:39:19.806163  196129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:39:19.850016  196129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:39:19.850100  196129 ssh_runner.go:195] Run: crio --version
	I1109 14:39:19.886662  196129 ssh_runner.go:195] Run: crio --version
	I1109 14:39:19.922121  196129 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:39:15.900502  196795 out.go:252] * Restarting existing docker container for "embed-certs-422728" ...
	I1109 14:39:15.900586  196795 cli_runner.go:164] Run: docker start embed-certs-422728
	I1109 14:39:16.155027  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:16.179053  196795 kic.go:430] container "embed-certs-422728" state is running.
	I1109 14:39:16.179431  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:16.202650  196795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:39:16.202886  196795 machine.go:94] provisionDockerMachine start ...
	I1109 14:39:16.202954  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:16.226627  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:16.227039  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:16.227058  196795 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:39:16.227903  196795 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:39:19.403380  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:39:19.403421  196795 ubuntu.go:182] provisioning hostname "embed-certs-422728"
	I1109 14:39:19.403526  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:19.425865  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:19.426162  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:19.426172  196795 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-422728 && echo "embed-certs-422728" | sudo tee /etc/hostname
	I1109 14:39:19.604836  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:39:19.604972  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:19.627515  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:19.627823  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:19.627846  196795 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422728/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:39:19.784610  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:39:19.784640  196795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:39:19.784720  196795 ubuntu.go:190] setting up certificates
	I1109 14:39:19.784751  196795 provision.go:84] configureAuth start
	I1109 14:39:19.784837  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:19.811636  196795 provision.go:143] copyHostCerts
	I1109 14:39:19.811695  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:39:19.811709  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:39:19.811785  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:39:19.811895  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:39:19.811901  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:39:19.811929  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:39:19.811991  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:39:19.811995  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:39:19.812021  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:39:19.812067  196795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422728 san=[127.0.0.1 192.168.76.2 embed-certs-422728 localhost minikube]
	I1109 14:39:20.018694  196795 provision.go:177] copyRemoteCerts
	I1109 14:39:20.018776  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:39:20.018829  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.041481  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.156424  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:39:20.179967  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1109 14:39:20.205588  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:39:20.224981  196795 provision.go:87] duration metric: took 440.207382ms to configureAuth
	I1109 14:39:20.225018  196795 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:39:20.225226  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:20.225355  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.251487  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:20.251808  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:20.251826  196795 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:39:19.924910  196129 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:39:19.947696  196129 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:39:19.951833  196129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:19.966489  196129 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:39:19.966612  196129 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:19.966665  196129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:20.014624  196129 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:20.014649  196129 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:39:20.014710  196129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:20.061070  196129 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:20.061092  196129 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:39:20.061100  196129 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:39:20.061201  196129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:39:20.061279  196129 ssh_runner.go:195] Run: crio config
	I1109 14:39:20.135847  196129 cni.go:84] Creating CNI manager for ""
	I1109 14:39:20.135907  196129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:20.135931  196129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:39:20.135955  196129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103048 NodeName:default-k8s-diff-port-103048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:39:20.136111  196129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:39:20.136224  196129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:39:20.144992  196129 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:39:20.145080  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:39:20.154676  196129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:39:20.171245  196129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:39:20.185580  196129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
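	(Aside: at this point the kubelet drop-in, the kubelet unit, and the kubeadm config printed earlier have all been staged on the node. A sketch for eyeballing them, with the paths taken from the three scp lines above:
	# illustrative: the three files staged by the scp lines above
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	cat /lib/systemd/system/kubelet.service
	cat /var/tmp/minikube/kubeadm.yaml.new
	The .new suffix is how minikube stages the config before it is consumed during cluster start.)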
	I1109 14:39:20.201765  196129 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:39:20.206582  196129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:20.218611  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:20.366358  196129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:20.384455  196129 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048 for IP: 192.168.85.2
	I1109 14:39:20.384475  196129 certs.go:195] generating shared ca certs ...
	I1109 14:39:20.384493  196129 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:20.384623  196129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:39:20.384665  196129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:39:20.384672  196129 certs.go:257] generating profile certs ...
	I1109 14:39:20.384786  196129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key
	I1109 14:39:20.384849  196129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c
	I1109 14:39:20.384887  196129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key
	I1109 14:39:20.384987  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:39:20.385015  196129 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:39:20.385023  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:39:20.385046  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:39:20.385067  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:39:20.385087  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:39:20.385128  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:20.385719  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:39:20.406961  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:39:20.439170  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:39:20.464461  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:39:20.498671  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:39:20.538022  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:39:20.576148  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:39:20.647061  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:39:20.713722  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:39:20.735137  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:39:20.759543  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:39:20.778573  196129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:39:20.791915  196129 ssh_runner.go:195] Run: openssl version
	I1109 14:39:20.804883  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:39:20.821236  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.826965  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.827033  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.880407  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:39:20.888410  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:39:20.897832  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.901509  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.901575  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.942961  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:39:20.950695  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:39:20.958594  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:20.963390  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:20.963454  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:21.024236  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
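The block above installs each CA into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 for 41162.pem, 51391683.0 for 4116.pem), which is how OpenSSL-based clients look up trust anchors. A minimal Go sketch of that hash-and-symlink step, assuming openssl on PATH and root privileges; the helper name is illustrative, not minikube code:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash installs pemPath under certsDir/<subject-hash>.0,
// mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace an existing link, as "ln -fs" would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}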
	I1109 14:39:21.038127  196129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:39:21.045164  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:39:21.092111  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:39:21.157987  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:39:21.210593  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:39:21.275270  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:39:21.342680  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
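Each "openssl x509 ... -checkend 86400" call above exits non-zero if the certificate will no longer be valid 24 hours from now. A rough Go equivalent using crypto/x509; the path is one of the certificates checked in this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path stops being valid
// within the next duration d, the same question -checkend asks.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}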
	I1109 14:39:21.420934  196129 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:21.421028  196129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:39:21.421090  196129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:39:21.519887  196129 cri.go:89] found id: "7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d"
	I1109 14:39:21.519932  196129 cri.go:89] found id: "6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb"
	I1109 14:39:21.519938  196129 cri.go:89] found id: "7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f"
	I1109 14:39:21.519945  196129 cri.go:89] found id: ""
	I1109 14:39:21.519999  196129 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:39:21.543667  196129 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:21Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:39:21.543751  196129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:39:21.572102  196129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:39:21.572126  196129 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:39:21.572191  196129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:39:21.608694  196129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:39:21.609164  196129 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103048" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:21.609280  196129 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103048" cluster setting kubeconfig missing "default-k8s-diff-port-103048" context setting]
	I1109 14:39:21.609631  196129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
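The repair above adds the missing cluster and context entries for default-k8s-diff-port-103048 to the shared kubeconfig and rewrites it under a file lock. A rough sketch of the same edit using client-go's clientcmd package, with the server address taken from this run; credentials and the CA reference are elided, so this is illustrative rather than minikube's actual implementation:

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21139-2320/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "default-k8s-diff-port-103048"
	// Add the missing cluster and context entries; auth data is omitted here.
	cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.85.2:8444"}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}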
	I1109 14:39:21.611238  196129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:39:21.624438  196129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1109 14:39:21.624472  196129 kubeadm.go:602] duration metric: took 52.339359ms to restartPrimaryControlPlane
	I1109 14:39:21.624481  196129 kubeadm.go:403] duration metric: took 203.557147ms to StartCluster
	I1109 14:39:21.624504  196129 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.624565  196129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:21.625263  196129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.625488  196129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:39:21.625839  196129 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:21.625884  196129 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:39:21.626037  196129 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.626062  196129 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.626071  196129 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:39:21.626090  196129 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.626131  196129 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.626162  196129 addons.go:248] addon dashboard should already be in state true
	I1109 14:39:21.626201  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.626098  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.626753  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.626812  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.626105  196129 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.627319  196129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103048"
	I1109 14:39:21.627583  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.630802  196129 out.go:179] * Verifying Kubernetes components...
	I1109 14:39:21.639626  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:21.684051  196129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:39:21.684138  196129 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:39:21.689975  196129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:21.690000  196129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:39:21.690064  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.691595  196129 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.691618  196129 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:39:21.691648  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.692125  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.693212  196129 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:39:20.654722  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:39:20.654747  196795 machine.go:97] duration metric: took 4.451852424s to provisionDockerMachine
	I1109 14:39:20.654773  196795 start.go:293] postStartSetup for "embed-certs-422728" (driver="docker")
	I1109 14:39:20.654784  196795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:39:20.654845  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:39:20.654912  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.679374  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
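The "docker container inspect -f" calls above resolve which host port Docker mapped to the container's 22/tcp so that an SSH client on the host can reach it (port 33070 for embed-certs-422728 in this run). A small Go sketch of the same lookup, assuming the docker CLI is available; the container name comes from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort returns the host port published for the container's 22/tcp,
// using the same Go template the log above passes to docker inspect.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-422728")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port)
}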
	I1109 14:39:20.801375  196795 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:39:20.805427  196795 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:39:20.805453  196795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:39:20.805462  196795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:39:20.805518  196795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:39:20.805610  196795 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:39:20.805711  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:39:20.816935  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:20.836739  196795 start.go:296] duration metric: took 181.951304ms for postStartSetup
	I1109 14:39:20.836817  196795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:39:20.836854  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.857314  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.961850  196795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:39:20.969380  196795 fix.go:56] duration metric: took 5.089888739s for fixHost
	I1109 14:39:20.969406  196795 start.go:83] releasing machines lock for "embed-certs-422728", held for 5.089951877s
	I1109 14:39:20.969490  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:20.989316  196795 ssh_runner.go:195] Run: cat /version.json
	I1109 14:39:20.989379  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.989634  196795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:39:20.989678  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:21.019194  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:21.033559  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:21.156397  196795 ssh_runner.go:195] Run: systemctl --version
	I1109 14:39:21.283906  196795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:39:21.351300  196795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:39:21.357015  196795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:39:21.357091  196795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:39:21.368625  196795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:39:21.368703  196795 start.go:496] detecting cgroup driver to use...
	I1109 14:39:21.368745  196795 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:39:21.368818  196795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:39:21.387612  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:39:21.408379  196795 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:39:21.408518  196795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:39:21.436708  196795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:39:21.466974  196795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:39:21.728628  196795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:39:21.974405  196795 docker.go:234] disabling docker service ...
	I1109 14:39:21.974481  196795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:39:22.005296  196795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:39:22.034069  196795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:39:22.248316  196795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:39:22.448530  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:39:22.471795  196795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:39:22.504195  196795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:39:22.504253  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.522453  196795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:39:22.522527  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.540125  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.553926  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.576162  196795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:39:22.585909  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.594587  196795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.609067  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
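Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (reconstructed from the commands, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]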
	I1109 14:39:22.617377  196795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:39:22.630975  196795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:39:22.638323  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:22.838273  196795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:39:23.036210  196795 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:39:23.036366  196795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:39:23.044751  196795 start.go:564] Will wait 60s for crictl version
	I1109 14:39:23.044867  196795 ssh_runner.go:195] Run: which crictl
	I1109 14:39:23.051712  196795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:39:23.102897  196795 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:39:23.103045  196795 ssh_runner.go:195] Run: crio --version
	I1109 14:39:23.156948  196795 ssh_runner.go:195] Run: crio --version
	I1109 14:39:23.225201  196795 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:39:21.696124  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:39:21.696149  196129 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:39:21.696218  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.741371  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:21.750194  196129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:21.750218  196129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:39:21.750296  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.766459  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:21.787620  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:22.148935  196129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:22.161023  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:22.228626  196129 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:39:22.237309  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:22.258565  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:39:22.258641  196129 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:39:22.389427  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:39:22.389532  196129 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:39:22.526134  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:39:22.526206  196129 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:39:22.627561  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:39:22.627621  196129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:39:22.674772  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:39:22.674843  196129 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:39:22.695155  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:39:22.695229  196129 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:39:22.738582  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:39:22.738656  196129 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:39:22.763078  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:39:22.763151  196129 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:39:22.805266  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:22.805341  196129 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:39:22.831261  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:23.228075  196795 cli_runner.go:164] Run: docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:39:23.257879  196795 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:39:23.262130  196795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:23.280983  196795 kubeadm.go:884] updating cluster {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerName:minikubeCA APIServerHAVIP: APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:39:23.281094  196795 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:23.281162  196795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:23.361099  196795 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:23.361119  196795 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:39:23.361171  196795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:23.413183  196795 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:23.413202  196795 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:39:23.413210  196795 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:39:23.413308  196795 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:39:23.413385  196795 ssh_runner.go:195] Run: crio config
	I1109 14:39:23.563585  196795 cni.go:84] Creating CNI manager for ""
	I1109 14:39:23.563654  196795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:23.563691  196795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:39:23.563764  196795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422728 NodeName:embed-certs-422728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:39:23.563947  196795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
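The generated KubeletConfiguration above pins cgroupDriver: cgroupfs, matching the cgroup_manager = "cgroupfs" written into the CRI-O drop-in earlier; kubelet and the container runtime must agree on the cgroup driver or pod sandboxes fail to start. A quick, illustrative Go check over the two files as they exist on the node (paths are the ones appearing in this log; the check itself is not part of minikube):

package main

import (
	"fmt"
	"os"
	"strings"
)

// contains reports whether the file at path mentions needle; errors are
// treated as "not found" to keep the sketch short.
func contains(path, needle string) bool {
	data, err := os.ReadFile(path)
	return err == nil && strings.Contains(string(data), needle)
}

func main() {
	crioOK := contains("/etc/crio/crio.conf.d/02-crio.conf", `cgroup_manager = "cgroupfs"`)
	kubeletOK := contains("/var/lib/kubelet/config.yaml", "cgroupDriver: cgroupfs")
	fmt.Printf("crio uses cgroupfs: %v, kubelet uses cgroupfs: %v\n", crioOK, kubeletOK)
}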
	I1109 14:39:23.564045  196795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:39:23.572916  196795 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:39:23.573035  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:39:23.581385  196795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1109 14:39:23.595976  196795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:39:23.609988  196795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1109 14:39:23.624103  196795 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:39:23.627903  196795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:23.637960  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:23.834596  196795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:23.851619  196795 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728 for IP: 192.168.76.2
	I1109 14:39:23.851693  196795 certs.go:195] generating shared ca certs ...
	I1109 14:39:23.851722  196795 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:23.851903  196795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:39:23.851988  196795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:39:23.852012  196795 certs.go:257] generating profile certs ...
	I1109 14:39:23.852144  196795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key
	I1109 14:39:23.852244  196795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a
	I1109 14:39:23.852384  196795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key
	I1109 14:39:23.852540  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:39:23.852606  196795 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:39:23.852637  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:39:23.852689  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:39:23.852735  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:39:23.852795  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:39:23.852868  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:23.853641  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:39:23.941040  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:39:24.012418  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:39:24.042429  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:39:24.071468  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1109 14:39:24.116434  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:39:24.161053  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:39:24.224105  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:39:24.267707  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:39:24.314203  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:39:24.345761  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:39:24.382658  196795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:39:24.401317  196795 ssh_runner.go:195] Run: openssl version
	I1109 14:39:24.412746  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:39:24.425193  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.429586  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.429714  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.492081  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:39:24.502155  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:39:24.510808  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.515143  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.515237  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.570674  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:39:24.579490  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:39:24.606288  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.614978  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.615077  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.702675  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:39:24.724731  196795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:39:24.736968  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:39:24.828754  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:39:24.919293  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:39:25.033233  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:39:25.133106  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:39:25.239384  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:39:25.320678  196795 kubeadm.go:401] StartCluster: {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:25.320782  196795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:39:25.320876  196795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:39:25.395488  196795 cri.go:89] found id: "a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366"
	I1109 14:39:25.395518  196795 cri.go:89] found id: "2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc"
	I1109 14:39:25.395523  196795 cri.go:89] found id: "7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df"
	I1109 14:39:25.395529  196795 cri.go:89] found id: "7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16"
	I1109 14:39:25.395540  196795 cri.go:89] found id: ""
	I1109 14:39:25.395626  196795 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:39:25.421453  196795 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:25Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:39:25.421568  196795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:39:25.434118  196795 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:39:25.434139  196795 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:39:25.434224  196795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:39:25.455848  196795 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:39:25.456462  196795 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-422728" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:25.456756  196795 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-422728" cluster setting kubeconfig missing "embed-certs-422728" context setting]
	I1109 14:39:25.457252  196795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.458892  196795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:39:25.472254  196795 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:39:25.472299  196795 kubeadm.go:602] duration metric: took 38.151656ms to restartPrimaryControlPlane
	I1109 14:39:25.472333  196795 kubeadm.go:403] duration metric: took 151.665347ms to StartCluster
	I1109 14:39:25.472350  196795 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.472439  196795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:25.474717  196795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.475122  196795 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:39:25.475457  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:25.475514  196795 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:39:25.475607  196795 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422728"
	I1109 14:39:25.475629  196795 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422728"
	W1109 14:39:25.475642  196795 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:39:25.475657  196795 addons.go:70] Setting dashboard=true in profile "embed-certs-422728"
	I1109 14:39:25.475671  196795 addons.go:239] Setting addon dashboard=true in "embed-certs-422728"
	W1109 14:39:25.475677  196795 addons.go:248] addon dashboard should already be in state true
	I1109 14:39:25.475700  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.476345  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.476519  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.476941  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.477501  196795 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422728"
	I1109 14:39:25.477528  196795 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422728"
	I1109 14:39:25.477804  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.483396  196795 out.go:179] * Verifying Kubernetes components...
	I1109 14:39:25.487964  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:25.515113  196795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:39:25.518086  196795 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:39:25.521009  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:39:25.521039  196795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:39:25.521115  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.540397  196795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:39:25.545565  196795 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:25.545587  196795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:39:25.545649  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.553421  196795 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422728"
	W1109 14:39:25.553458  196795 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:39:25.553498  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.553946  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.587976  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.610580  196795 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:25.610609  196795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:39:25.610676  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.611768  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.643462  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.951056  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:26.036278  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:39:26.036356  196795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:39:26.113974  196795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:26.133211  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:26.150339  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:39:26.150412  196795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:39:26.224674  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:39:26.224743  196795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:39:26.342164  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:39:26.342238  196795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:39:26.457225  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:39:26.457281  196795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:39:26.524480  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:39:26.524551  196795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:39:26.545432  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:39:26.545495  196795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:39:26.569785  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:39:26.569856  196795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:39:26.593384  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:26.593446  196795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:39:26.632772  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:29.705357  196129 node_ready.go:49] node "default-k8s-diff-port-103048" is "Ready"
	I1109 14:39:29.705456  196129 node_ready.go:38] duration metric: took 7.476741625s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:39:29.705484  196129 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:39:29.705569  196129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:39:32.996787  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.835671987s)
	I1109 14:39:32.996899  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.759518546s)
	I1109 14:39:32.997220  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.165879191s)
	I1109 14:39:32.997471  196129 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.291860623s)
	I1109 14:39:32.997521  196129 api_server.go:72] duration metric: took 11.371993953s to wait for apiserver process to appear ...
	I1109 14:39:32.997542  196129 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:39:32.997571  196129 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:39:33.000725  196129 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-103048 addons enable metrics-server
	
	I1109 14:39:33.020969  196129 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:39:33.023683  196129 api_server.go:141] control plane version: v1.34.1
	I1109 14:39:33.023714  196129 api_server.go:131] duration metric: took 26.153345ms to wait for apiserver health ...
	I1109 14:39:33.023725  196129 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:39:33.032087  196129 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:39:33.033482  196129 system_pods.go:59] 8 kube-system pods found
	I1109 14:39:33.033582  196129 system_pods.go:61] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:33.033606  196129 system_pods.go:61] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:33.033629  196129 system_pods.go:61] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:33.033667  196129 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:39:33.033692  196129 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:33.033712  196129 system_pods.go:61] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:39:33.033743  196129 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:39:33.033770  196129 system_pods.go:61] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:39:33.033790  196129 system_pods.go:74] duration metric: took 10.030263ms to wait for pod list to return data ...
	I1109 14:39:33.033824  196129 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:39:33.034992  196129 addons.go:515] duration metric: took 11.409095214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:39:33.040658  196129 default_sa.go:45] found service account: "default"
	I1109 14:39:33.040686  196129 default_sa.go:55] duration metric: took 6.835118ms for default service account to be created ...
	I1109 14:39:33.040697  196129 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:39:33.044695  196129 system_pods.go:86] 8 kube-system pods found
	I1109 14:39:33.044733  196129 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:33.044743  196129 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:33.044786  196129 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:33.044801  196129 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:39:33.044809  196129 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:33.044819  196129 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:39:33.044824  196129 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:39:33.044829  196129 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:39:33.044854  196129 system_pods.go:126] duration metric: took 4.149902ms to wait for k8s-apps to be running ...
	I1109 14:39:33.044870  196129 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:39:33.044951  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:39:33.077530  196129 system_svc.go:56] duration metric: took 32.649827ms WaitForService to wait for kubelet
	I1109 14:39:33.077564  196129 kubeadm.go:587] duration metric: took 11.452030043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:33.077606  196129 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:39:33.086426  196129 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:39:33.086461  196129 node_conditions.go:123] node cpu capacity is 2
	I1109 14:39:33.086473  196129 node_conditions.go:105] duration metric: took 8.861178ms to run NodePressure ...
	I1109 14:39:33.086516  196129 start.go:242] waiting for startup goroutines ...
	I1109 14:39:33.086533  196129 start.go:247] waiting for cluster config update ...
	I1109 14:39:33.086544  196129 start.go:256] writing updated cluster config ...
	I1109 14:39:33.086866  196129 ssh_runner.go:195] Run: rm -f paused
	I1109 14:39:33.096386  196129 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:39:33.164789  196129 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:39:35.201675  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.250533062s)
	I1109 14:39:35.201721  196795 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.087664371s)
	I1109 14:39:35.201760  196795 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422728" to be "Ready" ...
	I1109 14:39:35.202074  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.068793828s)
	I1109 14:39:35.202315  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.569467426s)
	I1109 14:39:35.205755  196795 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-422728 addons enable metrics-server
	
	I1109 14:39:35.282264  196795 node_ready.go:49] node "embed-certs-422728" is "Ready"
	I1109 14:39:35.282343  196795 node_ready.go:38] duration metric: took 80.561028ms for node "embed-certs-422728" to be "Ready" ...
	I1109 14:39:35.282371  196795 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:39:35.282455  196795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:39:35.306663  196795 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:39:35.309737  196795 addons.go:515] duration metric: took 9.834202528s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:39:35.336441  196795 api_server.go:72] duration metric: took 9.861275529s to wait for apiserver process to appear ...
	I1109 14:39:35.336467  196795 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:39:35.336489  196795 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:39:35.381991  196795 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:39:35.384051  196795 api_server.go:141] control plane version: v1.34.1
	I1109 14:39:35.384080  196795 api_server.go:131] duration metric: took 47.606213ms to wait for apiserver health ...
	I1109 14:39:35.384090  196795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:39:35.401482  196795 system_pods.go:59] 8 kube-system pods found
	I1109 14:39:35.401522  196795 system_pods.go:61] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:35.401532  196795 system_pods.go:61] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:35.401542  196795 system_pods.go:61] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:35.401547  196795 system_pods.go:61] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:39:35.401556  196795 system_pods.go:61] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:35.401564  196795 system_pods.go:61] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:39:35.401581  196795 system_pods.go:61] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:39:35.401590  196795 system_pods.go:61] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:39:35.401601  196795 system_pods.go:74] duration metric: took 17.504641ms to wait for pod list to return data ...
	I1109 14:39:35.401610  196795 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:39:35.428228  196795 default_sa.go:45] found service account: "default"
	I1109 14:39:35.428256  196795 default_sa.go:55] duration metric: took 26.634138ms for default service account to be created ...
	I1109 14:39:35.428275  196795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:39:35.432793  196795 system_pods.go:86] 8 kube-system pods found
	I1109 14:39:35.432824  196795 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:35.432834  196795 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:35.432841  196795 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:35.432854  196795 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:39:35.432865  196795 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:35.432877  196795 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:39:35.432884  196795 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:39:35.432901  196795 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:39:35.432909  196795 system_pods.go:126] duration metric: took 4.628396ms to wait for k8s-apps to be running ...
	I1109 14:39:35.432921  196795 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:39:35.432993  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:39:35.485432  196795 system_svc.go:56] duration metric: took 52.500556ms WaitForService to wait for kubelet
	I1109 14:39:35.485461  196795 kubeadm.go:587] duration metric: took 10.010301465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:35.485480  196795 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:39:35.509089  196795 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:39:35.509123  196795 node_conditions.go:123] node cpu capacity is 2
	I1109 14:39:35.509136  196795 node_conditions.go:105] duration metric: took 23.649629ms to run NodePressure ...
	I1109 14:39:35.509148  196795 start.go:242] waiting for startup goroutines ...
	I1109 14:39:35.509156  196795 start.go:247] waiting for cluster config update ...
	I1109 14:39:35.509166  196795 start.go:256] writing updated cluster config ...
	I1109 14:39:35.509440  196795 ssh_runner.go:195] Run: rm -f paused
	I1109 14:39:35.523671  196795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:39:35.544324  196795 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:39:35.214818  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:37.670741  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:37.550361  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:39.551201  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:39.671795  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:41.672702  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:42.050591  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:44.052665  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:43.679828  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:46.172576  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:46.549936  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:48.550731  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:50.550852  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:48.675461  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:51.171417  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:53.050155  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:55.050846  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:53.669698  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:55.670713  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:58.170504  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:57.550560  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:00.080694  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:00.191460  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:40:02.670935  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:40:02.550181  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:04.550484  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	I1109 14:40:05.170570  196129 pod_ready.go:94] pod "coredns-66bc5c9577-rbvc2" is "Ready"
	I1109 14:40:05.170595  196129 pod_ready.go:86] duration metric: took 32.005779394s for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.173494  196129 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.178322  196129 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.178350  196129 pod_ready.go:86] duration metric: took 4.826832ms for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.181165  196129 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.185964  196129 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.185994  196129 pod_ready.go:86] duration metric: took 4.801946ms for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.188492  196129 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.369137  196129 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.369168  196129 pod_ready.go:86] duration metric: took 180.647632ms for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.567982  196129 pod_ready.go:83] waiting for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.968952  196129 pod_ready.go:94] pod "kube-proxy-c57m2" is "Ready"
	I1109 14:40:05.968978  196129 pod_ready.go:86] duration metric: took 400.969079ms for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.169164  196129 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.568343  196129 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:06.568432  196129 pod_ready.go:86] duration metric: took 399.237416ms for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.568451  196129 pod_ready.go:40] duration metric: took 33.4720313s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:40:06.631797  196129 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:40:06.635018  196129 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103048" cluster and "default" namespace by default
	W1109 14:40:06.551498  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	I1109 14:40:07.550990  196795 pod_ready.go:94] pod "coredns-66bc5c9577-4hk6l" is "Ready"
	I1109 14:40:07.551029  196795 pod_ready.go:86] duration metric: took 32.006673308s for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.553713  196795 pod_ready.go:83] waiting for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.558418  196795 pod_ready.go:94] pod "etcd-embed-certs-422728" is "Ready"
	I1109 14:40:07.558442  196795 pod_ready.go:86] duration metric: took 4.698642ms for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.560963  196795 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.565961  196795 pod_ready.go:94] pod "kube-apiserver-embed-certs-422728" is "Ready"
	I1109 14:40:07.565990  196795 pod_ready.go:86] duration metric: took 4.998009ms for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.568596  196795 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.747686  196795 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422728" is "Ready"
	I1109 14:40:07.747712  196795 pod_ready.go:86] duration metric: took 179.092274ms for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.948777  196795 pod_ready.go:83] waiting for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.348208  196795 pod_ready.go:94] pod "kube-proxy-5zn8j" is "Ready"
	I1109 14:40:08.348242  196795 pod_ready.go:86] duration metric: took 399.417231ms for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.548588  196795 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.948477  196795 pod_ready.go:94] pod "kube-scheduler-embed-certs-422728" is "Ready"
	I1109 14:40:08.948506  196795 pod_ready.go:86] duration metric: took 399.893445ms for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.948519  196795 pod_ready.go:40] duration metric: took 33.424813505s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:40:09.011705  196795 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:40:09.015201  196795 out.go:179] * Done! kubectl is now configured to use "embed-certs-422728" cluster and "default" namespace by default
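
Note: the start logs above repeatedly probe the API server's /healthz endpoint (https://192.168.85.2:8444/healthz and https://192.168.76.2:8443/healthz) until it returns 200 with body "ok" before declaring the cluster healthy. The Go sketch below is a minimal, hypothetical version of that kind of probe; the URL, polling interval, deadline, and the skipped TLS verification are illustrative assumptions, not minikube's actual implementation.

// healthzprobe is a hypothetical sketch of the "waiting for apiserver healthz
// status" step seen in the log above. It is NOT minikube's code: the URL,
// timing, and skipped certificate verification are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; adjust for your cluster.
	url := "https://192.168.85.2:8444/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-local certificate here, so verification
		// is skipped purely for this sketch. Real tooling pins the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}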
	
	
	==> CRI-O <==
	Nov 09 14:40:01 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:01.248533172Z" level=info msg="Removed container 38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l/dashboard-metrics-scraper" id=7a76787b-fa16-49be-839a-80d0c7cd0a36 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:40:02 default-k8s-diff-port-103048 conmon[1147]: conmon cf78d41778b8d4241abc <ninfo>: container 1154 exited with status 1
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.240417122Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2ea6294a-7b46-44f0-af7f-fea2507ce523 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.241795554Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7b078ecf-ad56-46bf-8c85-cf58a9f8c8f7 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.243293889Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ad5bba3b-6022-45dd-b26c-be6b4a30237f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.243406744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.248740944Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.248913368Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/874d8eff2aa994f4a0df23deb7ae1d272461800e7ebe976c7b57a1997df8602f/merged/etc/passwd: no such file or directory"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.248934012Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/874d8eff2aa994f4a0df23deb7ae1d272461800e7ebe976c7b57a1997df8602f/merged/etc/group: no such file or directory"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.249182531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.270562984Z" level=info msg="Created container 887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c: kube-system/storage-provisioner/storage-provisioner" id=ad5bba3b-6022-45dd-b26c-be6b4a30237f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.272954398Z" level=info msg="Starting container: 887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c" id=3360318b-e6b9-4fa6-adc3-f98cfe7a1c6c name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:40:03 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:03.275412733Z" level=info msg="Started container" PID=1639 containerID=887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c description=kube-system/storage-provisioner/storage-provisioner id=3360318b-e6b9-4fa6-adc3-f98cfe7a1c6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=99b118862bb4b7267069a5577fd9df4036f6d5423bab063eda2461ef74dc704e
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.306247622Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.316769758Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.317372265Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.317675431Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.322318352Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.32235251Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.322375649Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.325498482Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.325529186Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.325549363Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.328808953Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:12 default-k8s-diff-port-103048 crio[653]: time="2025-11-09T14:40:12.328842471Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	887e2026e7789       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   99b118862bb4b       storage-provisioner                                    kube-system
	ce0dc366d5e71       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   eb13898fae8d3       dashboard-metrics-scraper-6ffb444bf9-8h69l             kubernetes-dashboard
	b77c688e34b49       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago       Running             kubernetes-dashboard        0                   18f2ffd9a5f9f       kubernetes-dashboard-855c9754f9-swwl8                  kubernetes-dashboard
	dc599fcbb3350       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   15521f9f70953       coredns-66bc5c9577-rbvc2                               kube-system
	e9c4c43949e6a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   6acf70a827692       busybox                                                default
	be38569ab0491       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   79035adc44601       kindnet-tz2x5                                          kube-system
	cf78d41778b8d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   99b118862bb4b       storage-provisioner                                    kube-system
	da4b547d02513       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   f96eaf86c4dce       kube-proxy-c57m2                                       kube-system
	7e93099edfb89       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   89ff9146f6244       kube-controller-manager-default-k8s-diff-port-103048   kube-system
	0c584231ed8c3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   2e62380a0a11b       kube-scheduler-default-k8s-diff-port-103048            kube-system
	6ac4e39c7a9ff       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   1b0c0f4494c5d       kube-apiserver-default-k8s-diff-port-103048            kube-system
	7d4eac93ccb3e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d47dac0b67da7       etcd-default-k8s-diff-port-103048                      kube-system
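
Note: the container status table above is the CRI-O runtime's view of the pods on the node. The same listing can be reproduced on the node with crictl; the sketch below simply shells out to it. The runtime endpoint path and the assumption that crictl is on PATH are illustrative, not taken from this run.

// listcontainers is a hypothetical sketch that reproduces the "container
// status" listing above by shelling out to crictl on the node. CRI-O commonly
// listens on /var/run/crio/crio.sock, but that path is an assumption here.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("crictl",
		"--runtime-endpoint", "unix:///var/run/crio/crio.sock",
		"ps", "--all")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "crictl failed:", err)
		os.Exit(1)
	}
}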
	
	
	==> coredns [dc599fcbb33507001316216bcb43133c63b59a24b97538fdfa9814b27f4e7cee] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38928 - 33408 "HINFO IN 3306701821934207797.2508219549672476477. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023355546s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
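
Note: the CoreDNS errors above show list requests to the in-cluster apiserver service (10.96.0.1:443) timing out while the control plane was still restarting. A quick way to reproduce that symptom from inside a pod is a plain TCP dial with a deadline, as in the hypothetical sketch below; the address and timeout are assumptions taken from the log, not part of CoreDNS itself.

// dialcheck is a hypothetical sketch of the connectivity test implied by the
// CoreDNS errors above: can this network namespace reach the kubernetes
// service ClusterIP? Address and timeout are illustrative assumptions.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "10.96.0.1:443" // kubernetes service ClusterIP seen in the log
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A failure here matches the "dial tcp 10.96.0.1:443: i/o timeout" symptom.
		fmt.Printf("dial %s failed: %v\n", addr, err)
		return
	}
	defer conn.Close()
	fmt.Printf("dial %s succeeded from this network namespace\n", addr)
}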
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-103048
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-103048
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=default-k8s-diff-port-103048
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_38_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:37:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-103048
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:40:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:39:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:39:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:39:51 +0000   Sun, 09 Nov 2025 14:37:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:39:51 +0000   Sun, 09 Nov 2025 14:38:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-103048
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                6ac075f8-cd4f-431f-b369-b54146be0749
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-rbvc2                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-default-k8s-diff-port-103048                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-tz2x5                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-103048             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-103048    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-c57m2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-103048             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8h69l              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-swwl8                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Warning  CgroupV1                 2m37s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m37s (x8 over 2m37s)  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m20s                  node-controller  Node default-k8s-diff-port-103048 event: Registered Node default-k8s-diff-port-103048 in Controller
	  Normal   NodeReady                98s                    kubelet          Node default-k8s-diff-port-103048 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node default-k8s-diff-port-103048 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-103048 event: Registered Node default-k8s-diff-port-103048 in Controller
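
Note: the node description above carries the data that the "verifying NodePressure condition" step in the start log inspects: the Ready, MemoryPressure, DiskPressure, and PIDPressure conditions plus the CPU and ephemeral-storage capacity. The client-go sketch below reads the same fields; the kubeconfig path and node name are assumptions, and this is an illustration rather than minikube's own check.

// nodeconditions is a hypothetical client-go sketch that reads the same
// conditions and capacity shown in the "describe nodes" output above.
// The kubeconfig path and node name are assumptions for illustration.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"default-k8s-diff-port-103048", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Same fields the NodePressure verification in the log reports.
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
	for _, cond := range node.Status.Conditions {
		fmt.Printf("%-16s %s\n", cond.Type, cond.Status)
	}
}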
	
	
	==> dmesg <==
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f] <==
	{"level":"warn","ts":"2025-11-09T14:39:25.728113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:25.791374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:25.843778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:25.904189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:25.959938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.001375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.083666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.118085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.183237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.306108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.416838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.462072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.561155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.602800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.651571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.764667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.770247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.788698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.839933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.871821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:26.926932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:27.035945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:27.100342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:27.136385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:27.270443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39646","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:24 up  1:22,  0 user,  load average: 3.10, 3.40, 2.81
	Linux default-k8s-diff-port-103048 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be38569ab0491d9f49c9dcbf8de0ce6af947e38961945fd5b81c78da6c67aadb] <==
	I1109 14:39:32.066805       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:39:32.067051       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:39:32.067204       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:39:32.067217       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:39:32.067239       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:39:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:39:32.299350       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:39:32.299382       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:39:32.299391       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:39:32.299657       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:40:02.299760       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:40:02.299762       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:40:02.299920       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:40:02.300077       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1109 14:40:03.800106       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:40:03.800139       1 metrics.go:72] Registering metrics
	I1109 14:40:03.800215       1 controller.go:711] "Syncing nftables rules"
	I1109 14:40:12.304628       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:40:12.304742       1 main.go:301] handling current node
	I1109 14:40:22.307296       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:40:22.307335       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb] <==
	I1109 14:39:29.931419       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:39:29.976946       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:39:29.988485       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:39:29.981772       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1109 14:39:29.988676       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 14:39:29.988768       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:39:29.977213       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:39:29.981707       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:39:29.981719       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:39:29.981731       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:39:30.020769       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:39:30.021014       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:39:30.021311       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:39:29.988744       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:39:30.096965       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:39:30.727165       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:39:32.426581       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:39:32.531581       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:39:32.589472       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:39:32.613961       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:39:32.797164       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.245.74"}
	I1109 14:39:32.861967       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.242.141"}
	I1109 14:39:34.666059       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:39:34.800364       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:39:35.097808       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d] <==
	I1109 14:39:34.576052       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:39:34.576136       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1109 14:39:34.576334       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:39:34.579989       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:39:34.580098       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:39:34.580194       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-103048"
	I1109 14:39:34.580239       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1109 14:39:34.581750       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:39:34.588760       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:39:34.589092       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:39:34.589358       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:39:34.589481       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:39:34.589698       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:39:34.590728       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:39:34.590802       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:39:34.614759       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:39:34.615313       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:39:34.633088       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:39:34.633165       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:39:34.659875       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:39:34.659926       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:39:34.660171       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:39:34.692691       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:39:34.692725       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:39:34.692735       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [da4b547d025130d284095282ff9a975da37d1fb29f3f9f7f5b591578b7601596] <==
	I1109 14:39:32.450871       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:39:32.989020       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:39:33.107946       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:39:33.107989       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:39:33.108057       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:39:33.327113       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:39:33.327184       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:39:33.336890       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:39:33.337229       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:39:33.337257       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:39:33.350797       1 config.go:200] "Starting service config controller"
	I1109 14:39:33.350896       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:39:33.351086       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:39:33.351128       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:39:33.351299       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:39:33.351343       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:39:33.353341       1 config.go:309] "Starting node config controller"
	I1109 14:39:33.353365       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:39:33.353372       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:39:33.459422       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:39:33.459465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:39:33.459516       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c584231ed8c3e74b3273a950c29860375fa8aeb7da46e7a2e139930d0830dd1] <==
	I1109 14:39:26.293731       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:39:29.856446       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:39:29.856486       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:39:29.856497       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:39:29.856504       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:39:30.072563       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:39:30.072670       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:39:30.136603       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:39:30.139637       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:30.141078       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:30.139673       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:39:30.348512       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:39:35 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:35.353948     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8829d1d9-dfd6-4815-b411-a34dcf9a605f-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-8h69l\" (UID: \"8829d1d9-dfd6-4815-b411-a34dcf9a605f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l"
	Nov 09 14:39:35 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:35.354063     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q8r\" (UniqueName: \"kubernetes.io/projected/41f9a5ae-7b92-448d-8014-b25c5eea04c2-kube-api-access-64q8r\") pod \"kubernetes-dashboard-855c9754f9-swwl8\" (UID: \"41f9a5ae-7b92-448d-8014-b25c5eea04c2\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-swwl8"
	Nov 09 14:39:35 default-k8s-diff-port-103048 kubelet[775]: W1109 14:39:35.645807     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/crio-eb13898fae8d301a77f6c698f17b6f20a229eb2b8340baaa3b95045cd403ba5d WatchSource:0}: Error finding container eb13898fae8d301a77f6c698f17b6f20a229eb2b8340baaa3b95045cd403ba5d: Status 404 returned error can't find the container with id eb13898fae8d301a77f6c698f17b6f20a229eb2b8340baaa3b95045cd403ba5d
	Nov 09 14:39:35 default-k8s-diff-port-103048 kubelet[775]: W1109 14:39:35.682906     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6ee0024be4f4d8590c4fac2f7a334ceb3085a08f0ed0d87b59fee8a0f0938ca3/crio-18f2ffd9a5f9fe2f579e19dea6b42de4fe342bfa1dbc2cff78c0fd9243becb81 WatchSource:0}: Error finding container 18f2ffd9a5f9fe2f579e19dea6b42de4fe342bfa1dbc2cff78c0fd9243becb81: Status 404 returned error can't find the container with id 18f2ffd9a5f9fe2f579e19dea6b42de4fe342bfa1dbc2cff78c0fd9243becb81
	Nov 09 14:39:42 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:42.150955     775 scope.go:117] "RemoveContainer" containerID="e7588a2f8c6c695a1bfcf66823627d7be7c69301bd1df725f15c554a1b7660d8"
	Nov 09 14:39:43 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:43.158318     775 scope.go:117] "RemoveContainer" containerID="e7588a2f8c6c695a1bfcf66823627d7be7c69301bd1df725f15c554a1b7660d8"
	Nov 09 14:39:43 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:43.158654     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:39:43 default-k8s-diff-port-103048 kubelet[775]: E1109 14:39:43.158851     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:39:44 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:44.170843     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:39:44 default-k8s-diff-port-103048 kubelet[775]: E1109 14:39:44.171022     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:39:45 default-k8s-diff-port-103048 kubelet[775]: I1109 14:39:45.583039     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:39:45 default-k8s-diff-port-103048 kubelet[775]: E1109 14:39:45.583216     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:40:00 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:00.709356     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:40:01 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:01.231725     775 scope.go:117] "RemoveContainer" containerID="38259cff432cccc842b08ceebbe59b16a06e9c73dc6de13fc113d45ec541b9f8"
	Nov 09 14:40:01 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:01.232181     775 scope.go:117] "RemoveContainer" containerID="ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	Nov 09 14:40:01 default-k8s-diff-port-103048 kubelet[775]: E1109 14:40:01.233057     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:40:01 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:01.256504     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-swwl8" podStartSLOduration=13.303506619 podStartE2EDuration="26.255155556s" podCreationTimestamp="2025-11-09 14:39:35 +0000 UTC" firstStartedPulling="2025-11-09 14:39:35.686508273 +0000 UTC m=+15.302954545" lastFinishedPulling="2025-11-09 14:39:48.63815721 +0000 UTC m=+28.254603482" observedRunningTime="2025-11-09 14:39:49.222598563 +0000 UTC m=+28.839044843" watchObservedRunningTime="2025-11-09 14:40:01.255155556 +0000 UTC m=+40.871601860"
	Nov 09 14:40:03 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:03.240035     775 scope.go:117] "RemoveContainer" containerID="cf78d41778b8d4241abc1e4adceffd57e40c195173e228a0d2dca2bd521cce85"
	Nov 09 14:40:05 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:05.583030     775 scope.go:117] "RemoveContainer" containerID="ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	Nov 09 14:40:05 default-k8s-diff-port-103048 kubelet[775]: E1109 14:40:05.583247     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:40:18 default-k8s-diff-port-103048 kubelet[775]: I1109 14:40:18.712479     775 scope.go:117] "RemoveContainer" containerID="ce0dc366d5e71942bc185442cfc06238d350d5895900cca6bed6bb9f68a56488"
	Nov 09 14:40:18 default-k8s-diff-port-103048 kubelet[775]: E1109 14:40:18.712659     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8h69l_kubernetes-dashboard(8829d1d9-dfd6-4815-b411-a34dcf9a605f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8h69l" podUID="8829d1d9-dfd6-4815-b411-a34dcf9a605f"
	Nov 09 14:40:19 default-k8s-diff-port-103048 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:40:19 default-k8s-diff-port-103048 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:40:19 default-k8s-diff-port-103048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [b77c688e34b493a3b43c4d4222447f464615cadf3927d84572de47e9f20273fb] <==
	2025/11/09 14:39:48 Using namespace: kubernetes-dashboard
	2025/11/09 14:39:48 Using in-cluster config to connect to apiserver
	2025/11/09 14:39:48 Using secret token for csrf signing
	2025/11/09 14:39:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:39:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:39:48 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:39:48 Generating JWE encryption key
	2025/11/09 14:39:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:39:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:39:48 Initializing JWE encryption key from synchronized object
	2025/11/09 14:39:48 Creating in-cluster Sidecar client
	2025/11/09 14:39:48 Serving insecurely on HTTP port: 9090
	2025/11/09 14:39:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:40:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:39:48 Starting overwatch
	
	
	==> storage-provisioner [887e2026e77897766c17fa9712a7f11a65c2c6b18e8793321feed28a770da36c] <==
	I1109 14:40:03.294512       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:40:03.306057       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:40:03.306107       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:40:03.321898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:06.777332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:11.038560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:14.637597       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:17.691197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:20.713481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:20.724147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:40:20.724319       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:40:20.724508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103048_4e41f923-9654-400a-8962-de961e818f43!
	I1109 14:40:20.725404       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ec6f7261-5e6f-4cd5-8d6b-f26a96ba18b9", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-103048_4e41f923-9654-400a-8962-de961e818f43 became leader
	W1109 14:40:20.738902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:20.757569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:40:20.828078       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-103048_4e41f923-9654-400a-8962-de961e818f43!
	W1109 14:40:22.764251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:22.773475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:24.785160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:24.793810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cf78d41778b8d4241abc1e4adceffd57e40c195173e228a0d2dca2bd521cce85] <==
	I1109 14:39:32.255386       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:40:02.258092       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048: exit status 2 (536.189561ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-103048 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.69s)
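The kubelet log above ends with systemd stopping kubelet.service at 14:40:19 while `status` still reports the apiserver as Running, consistent with a pause that disabled the kubelet but never froze the containers (the same sequence is visible in the embed-certs trace below). A quick way to confirm that half-paused state on the node (a sketch, not part of the test run; the profile name comes from this test, the crictl invocation is an assumption):

	out/minikube-linux-arm64 -p default-k8s-diff-port-103048 ssh -- 'sudo systemctl is-active kubelet; sudo crictl ps --state Running -q | wc -l'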

x
+
TestStartStop/group/embed-certs/serial/Pause (8.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-422728 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-422728 --alsologtostderr -v=1: exit status 80 (2.141082948s)

-- stdout --
	* Pausing node embed-certs-422728 ... 
	
	

-- /stdout --
** stderr ** 
	I1109 14:40:20.927330  200978 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:40:20.927490  200978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:40:20.927497  200978 out.go:374] Setting ErrFile to fd 2...
	I1109 14:40:20.927500  200978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:40:20.928166  200978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:40:20.928621  200978 out.go:368] Setting JSON to false
	I1109 14:40:20.928644  200978 mustload.go:66] Loading cluster: embed-certs-422728
	I1109 14:40:20.930261  200978 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:40:20.931474  200978 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:40:20.958402  200978 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:40:20.958792  200978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:40:21.046219  200978 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:40:21.037050192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:40:21.046860  200978 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-422728 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:40:21.050438  200978 out.go:179] * Pausing node embed-certs-422728 ... 
	I1109 14:40:21.054808  200978 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:40:21.055146  200978 ssh_runner.go:195] Run: systemctl --version
	I1109 14:40:21.055186  200978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:40:21.078165  200978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:40:21.195460  200978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:40:21.218124  200978 pause.go:52] kubelet running: true
	I1109 14:40:21.218204  200978 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:40:21.620421  200978 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:40:21.620515  200978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:40:21.719057  200978 cri.go:89] found id: "7e3f27e138c59aa0cb710724e534caab4379d6a10868fbbe90e3e8f884adb4a7"
	I1109 14:40:21.719081  200978 cri.go:89] found id: "d5c35ad31efd72a72f8ce73406787babc933e64dba57602e67b2a275575beab8"
	I1109 14:40:21.719087  200978 cri.go:89] found id: "323cdc33731a98cbe7f1496b50119456aef177e9a9a5892b2aa6aa476ddc2327"
	I1109 14:40:21.719090  200978 cri.go:89] found id: "3b1b52ea2560ce0c00fa2ea0c3ba7b2fb276d6faf0899c104043d7528470cddd"
	I1109 14:40:21.719093  200978 cri.go:89] found id: "de1e286695edb140cab32ced2c194b32034a19be382818767fa2a5a464fd0087"
	I1109 14:40:21.719097  200978 cri.go:89] found id: "a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366"
	I1109 14:40:21.719100  200978 cri.go:89] found id: "2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc"
	I1109 14:40:21.719102  200978 cri.go:89] found id: "7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df"
	I1109 14:40:21.719105  200978 cri.go:89] found id: "7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16"
	I1109 14:40:21.719112  200978 cri.go:89] found id: "4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98"
	I1109 14:40:21.719115  200978 cri.go:89] found id: "fe108ea59a5d4ffb1318ee1b4113ef12ff67b45f3c2041c028c9738cc25481d6"
	I1109 14:40:21.719118  200978 cri.go:89] found id: ""
	I1109 14:40:21.719164  200978 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:40:21.736329  200978 retry.go:31] will retry after 245.889194ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:40:21Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:40:21.982816  200978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:40:21.996671  200978 pause.go:52] kubelet running: false
	I1109 14:40:21.996737  200978 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:40:22.216434  200978 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:40:22.216514  200978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:40:22.315509  200978 cri.go:89] found id: "7e3f27e138c59aa0cb710724e534caab4379d6a10868fbbe90e3e8f884adb4a7"
	I1109 14:40:22.315527  200978 cri.go:89] found id: "d5c35ad31efd72a72f8ce73406787babc933e64dba57602e67b2a275575beab8"
	I1109 14:40:22.315532  200978 cri.go:89] found id: "323cdc33731a98cbe7f1496b50119456aef177e9a9a5892b2aa6aa476ddc2327"
	I1109 14:40:22.315536  200978 cri.go:89] found id: "3b1b52ea2560ce0c00fa2ea0c3ba7b2fb276d6faf0899c104043d7528470cddd"
	I1109 14:40:22.315540  200978 cri.go:89] found id: "de1e286695edb140cab32ced2c194b32034a19be382818767fa2a5a464fd0087"
	I1109 14:40:22.315548  200978 cri.go:89] found id: "a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366"
	I1109 14:40:22.315551  200978 cri.go:89] found id: "2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc"
	I1109 14:40:22.315555  200978 cri.go:89] found id: "7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df"
	I1109 14:40:22.315558  200978 cri.go:89] found id: "7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16"
	I1109 14:40:22.315565  200978 cri.go:89] found id: "4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98"
	I1109 14:40:22.315569  200978 cri.go:89] found id: "fe108ea59a5d4ffb1318ee1b4113ef12ff67b45f3c2041c028c9738cc25481d6"
	I1109 14:40:22.315572  200978 cri.go:89] found id: ""
	I1109 14:40:22.315618  200978 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:40:22.327410  200978 retry.go:31] will retry after 292.557819ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:40:22Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:40:22.623243  200978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:40:22.648973  200978 pause.go:52] kubelet running: false
	I1109 14:40:22.649049  200978 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:40:22.873742  200978 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:40:22.873871  200978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:40:22.964741  200978 cri.go:89] found id: "7e3f27e138c59aa0cb710724e534caab4379d6a10868fbbe90e3e8f884adb4a7"
	I1109 14:40:22.964763  200978 cri.go:89] found id: "d5c35ad31efd72a72f8ce73406787babc933e64dba57602e67b2a275575beab8"
	I1109 14:40:22.964768  200978 cri.go:89] found id: "323cdc33731a98cbe7f1496b50119456aef177e9a9a5892b2aa6aa476ddc2327"
	I1109 14:40:22.964772  200978 cri.go:89] found id: "3b1b52ea2560ce0c00fa2ea0c3ba7b2fb276d6faf0899c104043d7528470cddd"
	I1109 14:40:22.964776  200978 cri.go:89] found id: "de1e286695edb140cab32ced2c194b32034a19be382818767fa2a5a464fd0087"
	I1109 14:40:22.964780  200978 cri.go:89] found id: "a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366"
	I1109 14:40:22.964783  200978 cri.go:89] found id: "2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc"
	I1109 14:40:22.964786  200978 cri.go:89] found id: "7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df"
	I1109 14:40:22.964805  200978 cri.go:89] found id: "7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16"
	I1109 14:40:22.964812  200978 cri.go:89] found id: "4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98"
	I1109 14:40:22.964815  200978 cri.go:89] found id: "fe108ea59a5d4ffb1318ee1b4113ef12ff67b45f3c2041c028c9738cc25481d6"
	I1109 14:40:22.964818  200978 cri.go:89] found id: ""
	I1109 14:40:22.964871  200978 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:40:22.980569  200978 out.go:203] 
	W1109 14:40:22.983549  200978 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:40:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:40:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:40:22.983569  200978 out.go:285] * 
	* 
	W1109 14:40:22.991261  200978 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:40:22.996845  200978 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-422728 --alsologtostderr -v=1 failed: exit status 80
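The failing step in the trace above is the container freeze: pause disables the kubelet, lists the kube-system, kubernetes-dashboard and istio-operator containers with crictl, then retries `sudo runc list -f json` three times, and every attempt exits 1 with `open /run/runc: no such file or directory`, so minikube aborts with GUEST_PAUSE before any container is paused. A minimal check of which OCI runtime state directory actually exists on the node (a sketch; only /run/runc appears in the log, the /run/crun path and the crio config lookup are assumptions):

	out/minikube-linux-arm64 -p embed-certs-422728 ssh -- 'ls -d /run/runc /run/crun 2>/dev/null; sudo crio config 2>/dev/null | grep -m1 default_runtime'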
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-422728
helpers_test.go:243: (dbg) docker inspect embed-certs-422728:

-- stdout --
	[
	    {
	        "Id": "45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12",
	        "Created": "2025-11-09T14:37:33.73724942Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196924,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:39:15.931898847Z",
	            "FinishedAt": "2025-11-09T14:39:15.14453408Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/hostname",
	        "HostsPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/hosts",
	        "LogPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12-json.log",
	        "Name": "/embed-certs-422728",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-422728:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-422728",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12",
	                "LowerDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-422728",
	                "Source": "/var/lib/docker/volumes/embed-certs-422728/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-422728",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-422728",
	                "name.minikube.sigs.k8s.io": "embed-certs-422728",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29f3f7d828d652d9c9ddb4d846e708a2c5ab41bb94fcdef9a566961b5adc9615",
	            "SandboxKey": "/var/run/docker/netns/29f3f7d828d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-422728": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:34:1e:83:22:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "78ce79b8fdce892f49cf723023717b9a2880c30a5665eaa6c42c151329eb9e85",
	                    "EndpointID": "dea4fa3ed9f39346f83688a2e7b316193eff746ed3ee23d49fdbe1ab56df3077",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-422728",
	                        "45825e68cb86"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-422728 -n embed-certs-422728
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-422728 -n embed-certs-422728: exit status 2 (453.568035ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-422728 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-422728 logs -n 25: (1.84321587s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:34 UTC │ 09 Nov 25 14:35 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	│ stop    │ -p old-k8s-version-349599 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ image   │ old-k8s-version-349599 image list --format=json                                                                                                                                                                                               │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ pause   │ -p old-k8s-version-349599 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ delete  │ -p cert-expiration-179822                                                                                                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ stop    │ -p embed-certs-422728 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ image   │ default-k8s-diff-port-103048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p default-k8s-diff-port-103048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:39:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:39:15.653812  196795 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:39:15.654001  196795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:39:15.654038  196795 out.go:374] Setting ErrFile to fd 2...
	I1109 14:39:15.654052  196795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:39:15.654356  196795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:39:15.654781  196795 out.go:368] Setting JSON to false
	I1109 14:39:15.655688  196795 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4906,"bootTime":1762694250,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:39:15.655757  196795 start.go:143] virtualization:  
	I1109 14:39:15.660654  196795 out.go:179] * [embed-certs-422728] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:39:15.663936  196795 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:39:15.663991  196795 notify.go:221] Checking for updates...
	I1109 14:39:15.670031  196795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:39:15.672921  196795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:15.675823  196795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:39:15.678877  196795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:39:15.681871  196795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:39:15.685303  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:15.685991  196795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:39:15.716089  196795 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:39:15.716233  196795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:39:15.783072  196795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-09 14:39:15.77300627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:39:15.783205  196795 docker.go:319] overlay module found
	I1109 14:39:15.786478  196795 out.go:179] * Using the docker driver based on existing profile
	I1109 14:39:15.789381  196795 start.go:309] selected driver: docker
	I1109 14:39:15.789420  196795 start.go:930] validating driver "docker" against &{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:15.789515  196795 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:39:15.790229  196795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:39:15.845783  196795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-09 14:39:15.836143549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:39:15.846132  196795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:15.846168  196795 cni.go:84] Creating CNI manager for ""
	I1109 14:39:15.846227  196795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:15.846266  196795 start.go:353] cluster config:
	{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:15.849466  196795 out.go:179] * Starting "embed-certs-422728" primary control-plane node in "embed-certs-422728" cluster
	I1109 14:39:15.852353  196795 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:39:15.855395  196795 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:39:15.858354  196795 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:15.858406  196795 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:39:15.858425  196795 cache.go:65] Caching tarball of preloaded images
	I1109 14:39:15.858430  196795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:39:15.858538  196795 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:39:15.858550  196795 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:39:15.858709  196795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:39:15.879215  196795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:39:15.879245  196795 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:39:15.879257  196795 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:39:15.879367  196795 start.go:360] acquireMachinesLock for embed-certs-422728: {Name:mkaf26c3066ebca49339c9527aed846108c5e799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:39:15.879441  196795 start.go:364] duration metric: took 46.114µs to acquireMachinesLock for "embed-certs-422728"
	I1109 14:39:15.879465  196795 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:39:15.879476  196795 fix.go:54] fixHost starting: 
	I1109 14:39:15.879824  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:15.897379  196795 fix.go:112] recreateIfNeeded on embed-certs-422728: state=Stopped err=<nil>
	W1109 14:39:15.897409  196795 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:39:13.761899  196129 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103048" ...
	I1109 14:39:13.762001  196129 cli_runner.go:164] Run: docker start default-k8s-diff-port-103048
	I1109 14:39:14.005697  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:14.031930  196129 kic.go:430] container "default-k8s-diff-port-103048" state is running.
	I1109 14:39:14.032334  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:14.054133  196129 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/config.json ...
	I1109 14:39:14.054518  196129 machine.go:94] provisionDockerMachine start ...
	I1109 14:39:14.054646  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:14.076480  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:14.076798  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:14.076807  196129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:39:14.077436  196129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58722->127.0.0.1:33065: read: connection reset by peer
	I1109 14:39:17.231473  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:39:17.231499  196129 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103048"
	I1109 14:39:17.231624  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.249722  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.250048  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.250064  196129 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103048 && echo "default-k8s-diff-port-103048" | sudo tee /etc/hostname
	I1109 14:39:17.410092  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:39:17.410211  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.428985  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.429306  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.429330  196129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:39:17.580249  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:39:17.580275  196129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:39:17.580301  196129 ubuntu.go:190] setting up certificates
	I1109 14:39:17.580311  196129 provision.go:84] configureAuth start
	I1109 14:39:17.580368  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:17.598399  196129 provision.go:143] copyHostCerts
	I1109 14:39:17.598470  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:39:17.598489  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:39:17.598565  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:39:17.598662  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:39:17.598674  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:39:17.598703  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:39:17.598755  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:39:17.598765  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:39:17.598788  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:39:17.598837  196129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103048 localhost minikube]
	I1109 14:39:17.688954  196129 provision.go:177] copyRemoteCerts
	I1109 14:39:17.689019  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:39:17.689060  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.708206  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:17.819695  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:39:17.837093  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 14:39:17.854586  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:39:17.871745  196129 provision.go:87] duration metric: took 291.419804ms to configureAuth
	I1109 14:39:17.871814  196129 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:39:17.872050  196129 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:17.872194  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.889492  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.889805  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.889825  196129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:39:18.202831  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:39:18.202918  196129 machine.go:97] duration metric: took 4.148387076s to provisionDockerMachine
	I1109 14:39:18.202944  196129 start.go:293] postStartSetup for "default-k8s-diff-port-103048" (driver="docker")
	I1109 14:39:18.202988  196129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:39:18.203082  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:39:18.203170  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.224891  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.335626  196129 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:39:18.338990  196129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:39:18.339018  196129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:39:18.339029  196129 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:39:18.339123  196129 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:39:18.339197  196129 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:39:18.339307  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:39:18.347413  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:18.365395  196129 start.go:296] duration metric: took 162.403249ms for postStartSetup
	I1109 14:39:18.365474  196129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:39:18.365513  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.383461  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.485492  196129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:39:18.490710  196129 fix.go:56] duration metric: took 4.748854309s for fixHost
	I1109 14:39:18.490737  196129 start.go:83] releasing machines lock for "default-k8s-diff-port-103048", held for 4.748905699s
	I1109 14:39:18.490807  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:18.508468  196129 ssh_runner.go:195] Run: cat /version.json
	I1109 14:39:18.508516  196129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:39:18.508525  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.508574  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.533762  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.534380  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.733641  196129 ssh_runner.go:195] Run: systemctl --version
	I1109 14:39:18.740509  196129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:39:18.777813  196129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:39:18.782333  196129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:39:18.782411  196129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:39:18.790609  196129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:39:18.790636  196129 start.go:496] detecting cgroup driver to use...
	I1109 14:39:18.790700  196129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:39:18.790764  196129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:39:18.806443  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:39:18.820129  196129 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:39:18.820246  196129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:39:18.836297  196129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:39:18.849893  196129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:39:18.961965  196129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:39:19.074901  196129 docker.go:234] disabling docker service ...
	I1109 14:39:19.075010  196129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:39:19.090357  196129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:39:19.103755  196129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:39:19.214649  196129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:39:19.369065  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:39:19.382216  196129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:39:19.396769  196129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:39:19.396864  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.415946  196129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:39:19.416022  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.427276  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.437233  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.447125  196129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:39:19.455793  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.468606  196129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.482521  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.491385  196129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:39:19.499271  196129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:39:19.507157  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:19.643285  196129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:39:19.789716  196129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:39:19.789782  196129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:39:19.802113  196129 start.go:564] Will wait 60s for crictl version
	I1109 14:39:19.802187  196129 ssh_runner.go:195] Run: which crictl
	I1109 14:39:19.806163  196129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:39:19.850016  196129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:39:19.850100  196129 ssh_runner.go:195] Run: crio --version
	I1109 14:39:19.886662  196129 ssh_runner.go:195] Run: crio --version
	I1109 14:39:19.922121  196129 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
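	Taken together, the crictl and CRI-O edits in the lines above (pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl) amount to roughly the drop-in sketched below. This is a reconstruction from the sed commands in the log, not a dump of the file on the node: minikube edits /etc/crio/crio.conf.d/02-crio.conf in place, and the drop-in file name and the [crio.image]/[crio.runtime] section headers here are assumptions.

# Rough equivalent of the in-place sed edits shown above; file name and
# section headers are assumed, not taken from the log.
sudo tee /etc/crio/crio.conf.d/99-minikube-sketch.conf >/dev/null <<'EOF'
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl restart crio
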
	I1109 14:39:15.900502  196795 out.go:252] * Restarting existing docker container for "embed-certs-422728" ...
	I1109 14:39:15.900586  196795 cli_runner.go:164] Run: docker start embed-certs-422728
	I1109 14:39:16.155027  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:16.179053  196795 kic.go:430] container "embed-certs-422728" state is running.
	I1109 14:39:16.179431  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:16.202650  196795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:39:16.202886  196795 machine.go:94] provisionDockerMachine start ...
	I1109 14:39:16.202954  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:16.226627  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:16.227039  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:16.227058  196795 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:39:16.227903  196795 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:39:19.403380  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:39:19.403421  196795 ubuntu.go:182] provisioning hostname "embed-certs-422728"
	I1109 14:39:19.403526  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:19.425865  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:19.426162  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:19.426172  196795 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-422728 && echo "embed-certs-422728" | sudo tee /etc/hostname
	I1109 14:39:19.604836  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:39:19.604972  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:19.627515  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:19.627823  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:19.627846  196795 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422728/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:39:19.784610  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:39:19.784640  196795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:39:19.784720  196795 ubuntu.go:190] setting up certificates
	I1109 14:39:19.784751  196795 provision.go:84] configureAuth start
	I1109 14:39:19.784837  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:19.811636  196795 provision.go:143] copyHostCerts
	I1109 14:39:19.811695  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:39:19.811709  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:39:19.811785  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:39:19.811895  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:39:19.811901  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:39:19.811929  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:39:19.811991  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:39:19.811995  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:39:19.812021  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:39:19.812067  196795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422728 san=[127.0.0.1 192.168.76.2 embed-certs-422728 localhost minikube]
	I1109 14:39:20.018694  196795 provision.go:177] copyRemoteCerts
	I1109 14:39:20.018776  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:39:20.018829  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.041481  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.156424  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:39:20.179967  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1109 14:39:20.205588  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:39:20.224981  196795 provision.go:87] duration metric: took 440.207382ms to configureAuth
	I1109 14:39:20.225018  196795 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:39:20.225226  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:20.225355  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.251487  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:20.251808  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:20.251826  196795 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:39:19.924910  196129 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:39:19.947696  196129 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:39:19.951833  196129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:19.966489  196129 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:39:19.966612  196129 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:19.966665  196129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:20.014624  196129 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:20.014649  196129 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:39:20.014710  196129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:20.061070  196129 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:20.061092  196129 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:39:20.061100  196129 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:39:20.061201  196129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:39:20.061279  196129 ssh_runner.go:195] Run: crio config
	I1109 14:39:20.135847  196129 cni.go:84] Creating CNI manager for ""
	I1109 14:39:20.135907  196129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:20.135931  196129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:39:20.135955  196129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103048 NodeName:default-k8s-diff-port-103048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:39:20.136111  196129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:39:20.136224  196129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:39:20.144992  196129 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:39:20.145080  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:39:20.154676  196129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:39:20.171245  196129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:39:20.185580  196129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
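
The 2225-byte kubeadm.yaml.new written here is the multi-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A rough sketch, assuming gopkg.in/yaml.v3 is available, of pulling the KubeletConfiguration document back out to confirm its cgroup driver and CRI endpoint agree with the CRI-O settings applied to the node:

	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log line above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			if doc["kind"] == "KubeletConfiguration" {
				// These must agree with the cgroupfs / crio.sock settings elsewhere in the log.
				fmt.Println("cgroupDriver:", doc["cgroupDriver"])
				fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"])
			}
		}
	}
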
	I1109 14:39:20.201765  196129 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:39:20.206582  196129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
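
The bash pipeline above idempotently pins control-plane.minikube.internal to the node IP: it drops any existing line for that host and appends a fresh one. The same logic as a standalone Go sketch (illustrative only; the IP and hostname are the ones from this run):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for host and appends "ip<TAB>host",
	// mirroring the grep/echo pipeline in the log above.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // stale mapping for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
	}
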
	I1109 14:39:20.218611  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:20.366358  196129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:20.384455  196129 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048 for IP: 192.168.85.2
	I1109 14:39:20.384475  196129 certs.go:195] generating shared ca certs ...
	I1109 14:39:20.384493  196129 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:20.384623  196129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:39:20.384665  196129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:39:20.384672  196129 certs.go:257] generating profile certs ...
	I1109 14:39:20.384786  196129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key
	I1109 14:39:20.384849  196129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c
	I1109 14:39:20.384887  196129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key
	I1109 14:39:20.384987  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:39:20.385015  196129 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:39:20.385023  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:39:20.385046  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:39:20.385067  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:39:20.385087  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:39:20.385128  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:20.385719  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:39:20.406961  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:39:20.439170  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:39:20.464461  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:39:20.498671  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:39:20.538022  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:39:20.576148  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:39:20.647061  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:39:20.713722  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:39:20.735137  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:39:20.759543  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:39:20.778573  196129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:39:20.791915  196129 ssh_runner.go:195] Run: openssl version
	I1109 14:39:20.804883  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:39:20.821236  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.826965  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.827033  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.880407  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:39:20.888410  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:39:20.897832  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.901509  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.901575  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.942961  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:39:20.950695  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:39:20.958594  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:20.963390  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:20.963454  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:21.024236  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
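
Each CA file is made discoverable to OpenSSL-based clients by linking it under its subject hash (the output of openssl x509 -hash) as /etc/ssl/certs/<hash>.0, which is what the test -L || ln -fs commands above do. A rough standalone equivalent that shells out to the same openssl invocation (the hash value is just the one from this log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		ca := "/etc/ssl/certs/minikubeCA.pem" // target of the earlier ln -fs above
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", ca).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // "b5213941" in this run
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // replace a stale link if present
		if err := os.Symlink(ca, link); err != nil {
			panic(err)
		}
	}
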
	I1109 14:39:21.038127  196129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:39:21.045164  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:39:21.092111  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:39:21.157987  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:39:21.210593  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:39:21.275270  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:39:21.342680  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
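
The six openssl x509 -checkend 86400 runs above assert that each control-plane certificate will still be valid 24 hours from now. Roughly the same check using only Go's standard library (the path is one of the certs listed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within d,
	// which is what "openssl x509 -checkend 86400" tests for d = 24h.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
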
	I1109 14:39:21.420934  196129 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:21.421028  196129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:39:21.421090  196129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:39:21.519887  196129 cri.go:89] found id: "7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d"
	I1109 14:39:21.519932  196129 cri.go:89] found id: "6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb"
	I1109 14:39:21.519938  196129 cri.go:89] found id: "7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f"
	I1109 14:39:21.519945  196129 cri.go:89] found id: ""
	I1109 14:39:21.519999  196129 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:39:21.543667  196129 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:21Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:39:21.543751  196129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:39:21.572102  196129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:39:21.572126  196129 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:39:21.572191  196129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:39:21.608694  196129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:39:21.609164  196129 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103048" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:21.609280  196129 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103048" cluster setting kubeconfig missing "default-k8s-diff-port-103048" context setting]
	I1109 14:39:21.609631  196129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.611238  196129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:39:21.624438  196129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1109 14:39:21.624472  196129 kubeadm.go:602] duration metric: took 52.339359ms to restartPrimaryControlPlane
	I1109 14:39:21.624481  196129 kubeadm.go:403] duration metric: took 203.557147ms to StartCluster
	I1109 14:39:21.624504  196129 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.624565  196129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:21.625263  196129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.625488  196129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:39:21.625839  196129 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:21.625884  196129 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:39:21.626037  196129 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.626062  196129 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.626071  196129 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:39:21.626090  196129 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.626131  196129 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.626162  196129 addons.go:248] addon dashboard should already be in state true
	I1109 14:39:21.626201  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.626098  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.626753  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.626812  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.626105  196129 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.627319  196129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103048"
	I1109 14:39:21.627583  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.630802  196129 out.go:179] * Verifying Kubernetes components...
	I1109 14:39:21.639626  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:21.684051  196129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:39:21.684138  196129 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:39:21.689975  196129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:21.690000  196129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:39:21.690064  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.691595  196129 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.691618  196129 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:39:21.691648  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.692125  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.693212  196129 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:39:20.654722  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:39:20.654747  196795 machine.go:97] duration metric: took 4.451852424s to provisionDockerMachine
	I1109 14:39:20.654773  196795 start.go:293] postStartSetup for "embed-certs-422728" (driver="docker")
	I1109 14:39:20.654784  196795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:39:20.654845  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:39:20.654912  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.679374  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.801375  196795 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:39:20.805427  196795 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:39:20.805453  196795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:39:20.805462  196795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:39:20.805518  196795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:39:20.805610  196795 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:39:20.805711  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:39:20.816935  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:20.836739  196795 start.go:296] duration metric: took 181.951304ms for postStartSetup
	I1109 14:39:20.836817  196795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:39:20.836854  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.857314  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.961850  196795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:39:20.969380  196795 fix.go:56] duration metric: took 5.089888739s for fixHost
	I1109 14:39:20.969406  196795 start.go:83] releasing machines lock for "embed-certs-422728", held for 5.089951877s
	I1109 14:39:20.969490  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:20.989316  196795 ssh_runner.go:195] Run: cat /version.json
	I1109 14:39:20.989379  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.989634  196795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:39:20.989678  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:21.019194  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:21.033559  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:21.156397  196795 ssh_runner.go:195] Run: systemctl --version
	I1109 14:39:21.283906  196795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:39:21.351300  196795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:39:21.357015  196795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:39:21.357091  196795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:39:21.368625  196795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:39:21.368703  196795 start.go:496] detecting cgroup driver to use...
	I1109 14:39:21.368745  196795 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:39:21.368818  196795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:39:21.387612  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:39:21.408379  196795 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:39:21.408518  196795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:39:21.436708  196795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:39:21.466974  196795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:39:21.728628  196795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:39:21.974405  196795 docker.go:234] disabling docker service ...
	I1109 14:39:21.974481  196795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:39:22.005296  196795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:39:22.034069  196795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:39:22.248316  196795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:39:22.448530  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:39:22.471795  196795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:39:22.504195  196795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:39:22.504253  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.522453  196795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:39:22.522527  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.540125  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.553926  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.576162  196795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:39:22.585909  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.594587  196795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.609067  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
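
The sed chain above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, forces cgroup_manager to cgroupfs with conmon_cgroup = "pod", and ensures default_sysctls opens unprivileged ports from 0. A sketch of the first of those edits done in Go instead of sed (illustrative only; the remaining edits follow the same pattern):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Same substitution as the first sed: any existing pause_image line is
		// replaced with the pinned pause image.
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(conf, out, 0644); err != nil {
			panic(err)
		}
	}
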
	I1109 14:39:22.617377  196795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:39:22.630975  196795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:39:22.638323  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:22.838273  196795 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:39:23.036210  196795 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:39:23.036366  196795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:39:23.044751  196795 start.go:564] Will wait 60s for crictl version
	I1109 14:39:23.044867  196795 ssh_runner.go:195] Run: which crictl
	I1109 14:39:23.051712  196795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:39:23.102897  196795 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:39:23.103045  196795 ssh_runner.go:195] Run: crio --version
	I1109 14:39:23.156948  196795 ssh_runner.go:195] Run: crio --version
	I1109 14:39:23.225201  196795 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:39:21.696124  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:39:21.696149  196129 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:39:21.696218  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.741371  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:21.750194  196129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:21.750218  196129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:39:21.750296  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.766459  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:21.787620  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:22.148935  196129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:22.161023  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:22.228626  196129 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:39:22.237309  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:22.258565  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:39:22.258641  196129 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:39:22.389427  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:39:22.389532  196129 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:39:22.526134  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:39:22.526206  196129 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:39:22.627561  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:39:22.627621  196129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:39:22.674772  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:39:22.674843  196129 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:39:22.695155  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:39:22.695229  196129 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:39:22.738582  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:39:22.738656  196129 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:39:22.763078  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:39:22.763151  196129 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:39:22.805266  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:22.805341  196129 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:39:22.831261  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:23.228075  196795 cli_runner.go:164] Run: docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
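
The --format string passed to docker network inspect above is a Go text/template evaluated against the network object. A self-contained illustration of the same template syntax against a stand-in struct (field names mirror what the template dereferences; they are not a real docker API type, and the sample values are hypothetical):

	package main

	import (
		"os"
		"text/template"
	)

	type ipamConfig struct{ Subnet, Gateway string }

	type network struct {
		Name   string
		Driver string
		IPAM   struct{ Config []ipamConfig }
	}

	func main() {
		tmpl := template.Must(template.New("net").Parse(
			`{"Name": "{{.Name}}", "Driver": "{{.Driver}}", "Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}", "Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}` + "\n"))
		n := network{Name: "embed-certs-422728", Driver: "bridge"}
		n.IPAM.Config = []ipamConfig{{Subnet: "192.168.76.0/24", Gateway: "192.168.76.1"}}
		if err := tmpl.Execute(os.Stdout, n); err != nil {
			panic(err)
		}
	}
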
	I1109 14:39:23.257879  196795 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:39:23.262130  196795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:23.280983  196795 kubeadm.go:884] updating cluster {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:39:23.281094  196795 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:23.281162  196795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:23.361099  196795 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:23.361119  196795 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:39:23.361171  196795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:23.413183  196795 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:23.413202  196795 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:39:23.413210  196795 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:39:23.413308  196795 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:39:23.413385  196795 ssh_runner.go:195] Run: crio config
	I1109 14:39:23.563585  196795 cni.go:84] Creating CNI manager for ""
	I1109 14:39:23.563654  196795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:23.563691  196795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:39:23.563764  196795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422728 NodeName:embed-certs-422728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:39:23.563947  196795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:39:23.564045  196795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:39:23.572916  196795 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:39:23.573035  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:39:23.581385  196795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1109 14:39:23.595976  196795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:39:23.609988  196795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1109 14:39:23.624103  196795 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:39:23.627903  196795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:23.637960  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:23.834596  196795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:23.851619  196795 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728 for IP: 192.168.76.2
	I1109 14:39:23.851693  196795 certs.go:195] generating shared ca certs ...
	I1109 14:39:23.851722  196795 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:23.851903  196795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:39:23.851988  196795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:39:23.852012  196795 certs.go:257] generating profile certs ...
	I1109 14:39:23.852144  196795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key
	I1109 14:39:23.852244  196795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a
	I1109 14:39:23.852384  196795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key
	I1109 14:39:23.852540  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:39:23.852606  196795 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:39:23.852637  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:39:23.852689  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:39:23.852735  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:39:23.852795  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:39:23.852868  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:23.853641  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:39:23.941040  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:39:24.012418  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:39:24.042429  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:39:24.071468  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1109 14:39:24.116434  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:39:24.161053  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:39:24.224105  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:39:24.267707  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:39:24.314203  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:39:24.345761  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:39:24.382658  196795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:39:24.401317  196795 ssh_runner.go:195] Run: openssl version
	I1109 14:39:24.412746  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:39:24.425193  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.429586  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.429714  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.492081  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:39:24.502155  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:39:24.510808  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.515143  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.515237  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.570674  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:39:24.579490  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:39:24.606288  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.614978  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.615077  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.702675  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:39:24.724731  196795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:39:24.736968  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:39:24.828754  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:39:24.919293  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:39:25.033233  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:39:25.133106  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:39:25.239384  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:39:25.320678  196795 kubeadm.go:401] StartCluster: {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:25.320782  196795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:39:25.320876  196795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:39:25.395488  196795 cri.go:89] found id: "a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366"
	I1109 14:39:25.395518  196795 cri.go:89] found id: "2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc"
	I1109 14:39:25.395523  196795 cri.go:89] found id: "7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df"
	I1109 14:39:25.395529  196795 cri.go:89] found id: "7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16"
	I1109 14:39:25.395540  196795 cri.go:89] found id: ""
	I1109 14:39:25.395626  196795 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:39:25.421453  196795 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:25Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:39:25.421568  196795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:39:25.434118  196795 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:39:25.434139  196795 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:39:25.434224  196795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:39:25.455848  196795 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:39:25.456462  196795 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-422728" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:25.456756  196795 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-422728" cluster setting kubeconfig missing "embed-certs-422728" context setting]
	I1109 14:39:25.457252  196795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
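
The kubeconfig repair above fires because neither a cluster nor a context named after the profile exists yet in the workspace kubeconfig. A sketch of that existence check using client-go's clientcmd loader (the path and profile name are taken from the log, but this is illustrative code, not minikube source):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Values taken from the log lines above; purely illustrative here.
	kubeconfig := "/home/jenkins/minikube-integration/21139-2320/kubeconfig"
	name := "embed-certs-422728"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	if !hasCluster || !hasContext {
		// Mirrors the "needs updating (will repair)" decision in the log.
		fmt.Printf("kubeconfig missing %q (cluster=%v context=%v); needs repair\n", name, hasCluster, hasContext)
		return
	}
	fmt.Println("kubeconfig already references", name)
}
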
	I1109 14:39:25.458892  196795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:39:25.472254  196795 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:39:25.472299  196795 kubeadm.go:602] duration metric: took 38.151656ms to restartPrimaryControlPlane
	I1109 14:39:25.472333  196795 kubeadm.go:403] duration metric: took 151.665347ms to StartCluster
	I1109 14:39:25.472350  196795 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.472439  196795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:25.474717  196795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.475122  196795 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:39:25.475457  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:25.475514  196795 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:39:25.475607  196795 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422728"
	I1109 14:39:25.475629  196795 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422728"
	W1109 14:39:25.475642  196795 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:39:25.475657  196795 addons.go:70] Setting dashboard=true in profile "embed-certs-422728"
	I1109 14:39:25.475671  196795 addons.go:239] Setting addon dashboard=true in "embed-certs-422728"
	W1109 14:39:25.475677  196795 addons.go:248] addon dashboard should already be in state true
	I1109 14:39:25.475700  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.476345  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.476519  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.476941  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.477501  196795 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422728"
	I1109 14:39:25.477528  196795 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422728"
	I1109 14:39:25.477804  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.483396  196795 out.go:179] * Verifying Kubernetes components...
	I1109 14:39:25.487964  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:25.515113  196795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:39:25.518086  196795 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:39:25.521009  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:39:25.521039  196795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:39:25.521115  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.540397  196795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:39:25.545565  196795 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:25.545587  196795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:39:25.545649  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.553421  196795 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422728"
	W1109 14:39:25.553458  196795 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:39:25.553498  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.553946  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.587976  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.610580  196795 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:25.610609  196795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:39:25.610676  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.611768  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.643462  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.951056  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:26.036278  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:39:26.036356  196795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:39:26.113974  196795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:26.133211  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:26.150339  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:39:26.150412  196795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:39:26.224674  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:39:26.224743  196795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:39:26.342164  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:39:26.342238  196795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:39:26.457225  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:39:26.457281  196795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:39:26.524480  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:39:26.524551  196795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:39:26.545432  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:39:26.545495  196795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:39:26.569785  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:39:26.569856  196795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:39:26.593384  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:26.593446  196795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:39:26.632772  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
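
Each dashboard manifest is copied under /etc/kubernetes/addons and then applied in a single kubectl invocation with one -f flag per file, as the command above shows. A reduced sketch of assembling that call (file list trimmed; paths follow the log but the code itself is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	// Same pattern as the log: point kubectl at the in-VM admin kubeconfig.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}

Applying all ten manifests in one invocation is why a single "Completed" line with one combined duration appears later in the log.
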
	I1109 14:39:29.705357  196129 node_ready.go:49] node "default-k8s-diff-port-103048" is "Ready"
	I1109 14:39:29.705456  196129 node_ready.go:38] duration metric: took 7.476741625s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:39:29.705484  196129 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:39:29.705569  196129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:39:32.996787  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.835671987s)
	I1109 14:39:32.996899  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.759518546s)
	I1109 14:39:32.997220  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.165879191s)
	I1109 14:39:32.997471  196129 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.291860623s)
	I1109 14:39:32.997521  196129 api_server.go:72] duration metric: took 11.371993953s to wait for apiserver process to appear ...
	I1109 14:39:32.997542  196129 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:39:32.997571  196129 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:39:33.000725  196129 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-103048 addons enable metrics-server
	
	I1109 14:39:33.020969  196129 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:39:33.023683  196129 api_server.go:141] control plane version: v1.34.1
	I1109 14:39:33.023714  196129 api_server.go:131] duration metric: took 26.153345ms to wait for apiserver health ...
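
The healthz wait above is a repeated HTTPS GET against the API server (here 192.168.85.2:8444/healthz) until it answers 200 "ok". A standalone sketch of such a probe; TLS verification is skipped here purely to keep the example short, which is an assumption of the sketch, not how minikube authenticates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only: skip server certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
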
	I1109 14:39:33.023725  196129 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:39:33.032087  196129 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:39:33.033482  196129 system_pods.go:59] 8 kube-system pods found
	I1109 14:39:33.033582  196129 system_pods.go:61] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:33.033606  196129 system_pods.go:61] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:33.033629  196129 system_pods.go:61] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:33.033667  196129 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:39:33.033692  196129 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:33.033712  196129 system_pods.go:61] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:39:33.033743  196129 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:39:33.033770  196129 system_pods.go:61] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:39:33.033790  196129 system_pods.go:74] duration metric: took 10.030263ms to wait for pod list to return data ...
	I1109 14:39:33.033824  196129 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:39:33.034992  196129 addons.go:515] duration metric: took 11.409095214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:39:33.040658  196129 default_sa.go:45] found service account: "default"
	I1109 14:39:33.040686  196129 default_sa.go:55] duration metric: took 6.835118ms for default service account to be created ...
	I1109 14:39:33.040697  196129 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:39:33.044695  196129 system_pods.go:86] 8 kube-system pods found
	I1109 14:39:33.044733  196129 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:33.044743  196129 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:33.044786  196129 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:33.044801  196129 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:39:33.044809  196129 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:33.044819  196129 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:39:33.044824  196129 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:39:33.044829  196129 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:39:33.044854  196129 system_pods.go:126] duration metric: took 4.149902ms to wait for k8s-apps to be running ...
	I1109 14:39:33.044870  196129 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:39:33.044951  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:39:33.077530  196129 system_svc.go:56] duration metric: took 32.649827ms WaitForService to wait for kubelet
	I1109 14:39:33.077564  196129 kubeadm.go:587] duration metric: took 11.452030043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:33.077606  196129 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:39:33.086426  196129 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:39:33.086461  196129 node_conditions.go:123] node cpu capacity is 2
	I1109 14:39:33.086473  196129 node_conditions.go:105] duration metric: took 8.861178ms to run NodePressure ...
	I1109 14:39:33.086516  196129 start.go:242] waiting for startup goroutines ...
	I1109 14:39:33.086533  196129 start.go:247] waiting for cluster config update ...
	I1109 14:39:33.086544  196129 start.go:256] writing updated cluster config ...
	I1109 14:39:33.086866  196129 ssh_runner.go:195] Run: rm -f paused
	I1109 14:39:33.096386  196129 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:39:33.164789  196129 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:39:35.201675  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.250533062s)
	I1109 14:39:35.201721  196795 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.087664371s)
	I1109 14:39:35.201760  196795 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422728" to be "Ready" ...
	I1109 14:39:35.202074  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.068793828s)
	I1109 14:39:35.202315  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.569467426s)
	I1109 14:39:35.205755  196795 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-422728 addons enable metrics-server
	
	I1109 14:39:35.282264  196795 node_ready.go:49] node "embed-certs-422728" is "Ready"
	I1109 14:39:35.282343  196795 node_ready.go:38] duration metric: took 80.561028ms for node "embed-certs-422728" to be "Ready" ...
	I1109 14:39:35.282371  196795 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:39:35.282455  196795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:39:35.306663  196795 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:39:35.309737  196795 addons.go:515] duration metric: took 9.834202528s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:39:35.336441  196795 api_server.go:72] duration metric: took 9.861275529s to wait for apiserver process to appear ...
	I1109 14:39:35.336467  196795 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:39:35.336489  196795 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:39:35.381991  196795 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:39:35.384051  196795 api_server.go:141] control plane version: v1.34.1
	I1109 14:39:35.384080  196795 api_server.go:131] duration metric: took 47.606213ms to wait for apiserver health ...
	I1109 14:39:35.384090  196795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:39:35.401482  196795 system_pods.go:59] 8 kube-system pods found
	I1109 14:39:35.401522  196795 system_pods.go:61] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:35.401532  196795 system_pods.go:61] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:35.401542  196795 system_pods.go:61] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:35.401547  196795 system_pods.go:61] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:39:35.401556  196795 system_pods.go:61] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:35.401564  196795 system_pods.go:61] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:39:35.401581  196795 system_pods.go:61] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:39:35.401590  196795 system_pods.go:61] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:39:35.401601  196795 system_pods.go:74] duration metric: took 17.504641ms to wait for pod list to return data ...
	I1109 14:39:35.401610  196795 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:39:35.428228  196795 default_sa.go:45] found service account: "default"
	I1109 14:39:35.428256  196795 default_sa.go:55] duration metric: took 26.634138ms for default service account to be created ...
	I1109 14:39:35.428275  196795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:39:35.432793  196795 system_pods.go:86] 8 kube-system pods found
	I1109 14:39:35.432824  196795 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:35.432834  196795 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:35.432841  196795 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:35.432854  196795 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:39:35.432865  196795 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:35.432877  196795 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:39:35.432884  196795 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:39:35.432901  196795 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:39:35.432909  196795 system_pods.go:126] duration metric: took 4.628396ms to wait for k8s-apps to be running ...
	I1109 14:39:35.432921  196795 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:39:35.432993  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:39:35.485432  196795 system_svc.go:56] duration metric: took 52.500556ms WaitForService to wait for kubelet
	I1109 14:39:35.485461  196795 kubeadm.go:587] duration metric: took 10.010301465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:35.485480  196795 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:39:35.509089  196795 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:39:35.509123  196795 node_conditions.go:123] node cpu capacity is 2
	I1109 14:39:35.509136  196795 node_conditions.go:105] duration metric: took 23.649629ms to run NodePressure ...
	I1109 14:39:35.509148  196795 start.go:242] waiting for startup goroutines ...
	I1109 14:39:35.509156  196795 start.go:247] waiting for cluster config update ...
	I1109 14:39:35.509166  196795 start.go:256] writing updated cluster config ...
	I1109 14:39:35.509440  196795 ssh_runner.go:195] Run: rm -f paused
	I1109 14:39:35.523671  196795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:39:35.544324  196795 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:39:35.214818  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:37.670741  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:37.550361  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:39.551201  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:39.671795  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:41.672702  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:42.050591  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:44.052665  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:43.679828  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:46.172576  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:46.549936  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:48.550731  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:50.550852  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:48.675461  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:51.171417  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:53.050155  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:55.050846  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:53.669698  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:55.670713  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:58.170504  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:57.550560  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:00.080694  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:00.191460  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:40:02.670935  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:40:02.550181  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:04.550484  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	I1109 14:40:05.170570  196129 pod_ready.go:94] pod "coredns-66bc5c9577-rbvc2" is "Ready"
	I1109 14:40:05.170595  196129 pod_ready.go:86] duration metric: took 32.005779394s for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.173494  196129 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.178322  196129 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.178350  196129 pod_ready.go:86] duration metric: took 4.826832ms for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.181165  196129 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.185964  196129 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.185994  196129 pod_ready.go:86] duration metric: took 4.801946ms for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.188492  196129 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.369137  196129 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.369168  196129 pod_ready.go:86] duration metric: took 180.647632ms for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.567982  196129 pod_ready.go:83] waiting for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.968952  196129 pod_ready.go:94] pod "kube-proxy-c57m2" is "Ready"
	I1109 14:40:05.968978  196129 pod_ready.go:86] duration metric: took 400.969079ms for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.169164  196129 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.568343  196129 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:06.568432  196129 pod_ready.go:86] duration metric: took 399.237416ms for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.568451  196129 pod_ready.go:40] duration metric: took 33.4720313s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:40:06.631797  196129 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:40:06.635018  196129 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103048" cluster and "default" namespace by default
	W1109 14:40:06.551498  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	I1109 14:40:07.550990  196795 pod_ready.go:94] pod "coredns-66bc5c9577-4hk6l" is "Ready"
	I1109 14:40:07.551029  196795 pod_ready.go:86] duration metric: took 32.006673308s for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.553713  196795 pod_ready.go:83] waiting for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.558418  196795 pod_ready.go:94] pod "etcd-embed-certs-422728" is "Ready"
	I1109 14:40:07.558442  196795 pod_ready.go:86] duration metric: took 4.698642ms for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.560963  196795 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.565961  196795 pod_ready.go:94] pod "kube-apiserver-embed-certs-422728" is "Ready"
	I1109 14:40:07.565990  196795 pod_ready.go:86] duration metric: took 4.998009ms for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.568596  196795 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.747686  196795 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422728" is "Ready"
	I1109 14:40:07.747712  196795 pod_ready.go:86] duration metric: took 179.092274ms for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.948777  196795 pod_ready.go:83] waiting for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.348208  196795 pod_ready.go:94] pod "kube-proxy-5zn8j" is "Ready"
	I1109 14:40:08.348242  196795 pod_ready.go:86] duration metric: took 399.417231ms for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.548588  196795 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.948477  196795 pod_ready.go:94] pod "kube-scheduler-embed-certs-422728" is "Ready"
	I1109 14:40:08.948506  196795 pod_ready.go:86] duration metric: took 399.893445ms for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.948519  196795 pod_ready.go:40] duration metric: took 33.424813505s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
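
The "extra waiting" phase that just finished polls kube-system pods carrying one of the listed component labels until each reports the Ready condition or disappears. A compressed client-go sketch of one such readiness check; the kubeconfig path and the kube-dns selector come from the log above, but the code itself is illustrative, not minikube's pod_ready implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21139-2320/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// One of the selectors from the log; coredns carries k8s-app=kube-dns.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, isReady(p))
	}
}
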
	I1109 14:40:09.011705  196795 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:40:09.015201  196795 out.go:179] * Done! kubectl is now configured to use "embed-certs-422728" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.174014359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.180737077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.181257836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.198212406Z" level=info msg="Created container 4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl/dashboard-metrics-scraper" id=88611334-173b-429b-a2b8-f9cc03ee7d78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.199373901Z" level=info msg="Starting container: 4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98" id=03454b15-bf55-4377-8bd6-b983199910d7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.201388163Z" level=info msg="Started container" PID=1649 containerID=4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl/dashboard-metrics-scraper id=03454b15-bf55-4377-8bd6-b983199910d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b0c05c409c71a34c4df64cb5c2ff501bc3a2054f1dbdf2985e00a20e0c69e2f
	Nov 09 14:40:13 embed-certs-422728 conmon[1647]: conmon 4d552f4b4d8a6b91636f <ninfo>: container 1649 exited with status 1
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.474069764Z" level=info msg="Removing container: 5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf" id=7a1e0c2a-6fec-4bca-bbe4-293b757cd551 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.482488538Z" level=info msg="Error loading conmon cgroup of container 5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf: cgroup deleted" id=7a1e0c2a-6fec-4bca-bbe4-293b757cd551 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.487064947Z" level=info msg="Removed container 5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl/dashboard-metrics-scraper" id=7a1e0c2a-6fec-4bca-bbe4-293b757cd551 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.933853456Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.938658979Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.938693728Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.938716473Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.941992095Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.942031538Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.942055546Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.946447856Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.946484238Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.946505177Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.950019227Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.950052466Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.950077656Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.954130738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.954166365Z" level=info msg="Updated default CNI network name to kindnet"
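
The CREATE/WRITE/RENAME sequence above is CRI-O's config watcher noticing kindnet rewriting its conflist atomically: it writes 10-kindnet.conflist.temp, then renames it into place, and CRI-O re-reads the default CNI network after each event. A toy analog of that directory watch, assuming the github.com/fsnotify/fsnotify library is available (this is not CRI-O's own code):

package main

import (
	"fmt"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		panic(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		panic(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// CRI-O reacts to events like these by reloading its CNI config.
			fmt.Printf("CNI monitoring event %s %q\n", ev.Op, ev.Name)
		case err := <-w.Errors:
			fmt.Println("watch error:", err)
		}
	}
}
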
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4d552f4b4d8a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   9b0c05c409c71       dashboard-metrics-scraper-6ffb444bf9-phsgl   kubernetes-dashboard
	7e3f27e138c59       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   d40c1330d26c4       storage-provisioner                          kube-system
	fe108ea59a5d4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   1ca2e62c14378       kubernetes-dashboard-855c9754f9-qdgpq        kubernetes-dashboard
	d5c35ad31efd7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   079f28fdbc015       coredns-66bc5c9577-4hk6l                     kube-system
	df5eeef259ea8       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   4f8c53fa1d93a       busybox                                      default
	323cdc33731a9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           49 seconds ago      Running             kindnet-cni                 1                   77d93fc0efa92       kindnet-29xxd                                kube-system
	3b1b52ea2560c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           49 seconds ago      Exited              storage-provisioner         1                   d40c1330d26c4       storage-provisioner                          kube-system
	de1e286695edb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           49 seconds ago      Running             kube-proxy                  1                   c12e6d2b51064       kube-proxy-5zn8j                             kube-system
	a9943a66511d5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   31b9b858ac8d4       kube-scheduler-embed-certs-422728            kube-system
	2b949bf057b2f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   5fda1dcfafbd8       kube-apiserver-embed-certs-422728            kube-system
	7ac348b06cb3a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   7959eb0b36f7e       kube-controller-manager-embed-certs-422728   kube-system
	7f99978e234d1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   011fa537d4769       etcd-embed-certs-422728                      kube-system
	
	
	==> coredns [d5c35ad31efd72a72f8ce73406787babc933e64dba57602e67b2a275575beab8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60362 - 58300 "HINFO IN 4347462580880539103.5425589942879162873. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014838108s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
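
The i/o timeouts above are coredns failing to reach the in-cluster kubernetes Service VIP (10.96.0.1:443) while service networking is still converging after the restart; they stop once the API becomes reachable, consistent with the coredns pod turning Ready later in the log. A minimal reachability probe of that address, to be run from inside the cluster network (illustrative only):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the in-cluster "kubernetes" Service address seen in the
	// coredns errors above; a bare TCP dial distinguishes timeout from refusal.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("reachable")
}
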
	
	
	==> describe nodes <==
	Name:               embed-certs-422728
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-422728
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=embed-certs-422728
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_38_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-422728
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:40:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:40:04 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:40:04 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:40:04 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:40:04 +0000   Sun, 09 Nov 2025 14:38:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-422728
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d088bd86-8a64-46dd-b81e-fc8968fd6fcd
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-4hk6l                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-422728                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-29xxd                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-422728             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-422728    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-5zn8j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-422728             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-phsgl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qdgpq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 47s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node embed-certs-422728 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-422728 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m18s                  node-controller  Node embed-certs-422728 event: Registered Node embed-certs-422728 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-422728 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node embed-certs-422728 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node embed-certs-422728 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node embed-certs-422728 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node embed-certs-422728 event: Registered Node embed-certs-422728 in Controller
	
	
	==> dmesg <==
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16] <==
	{"level":"warn","ts":"2025-11-09T14:39:29.762249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:29.808537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:29.891446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:29.928017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:29.976565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.040667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.088657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.133327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.208721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.300042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.356881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.395372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.451217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.495182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.524741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.579498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.622844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.672371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.702820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.749024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.812744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.859994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.905486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.936193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:31.105179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42154","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:24 up  1:22,  0 user,  load average: 3.10, 3.40, 2.81
	Linux embed-certs-422728 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [323cdc33731a98cbe7f1496b50119456aef177e9a9a5892b2aa6aa476ddc2327] <==
	I1109 14:39:35.741237       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:39:35.745332       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:39:35.751729       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:39:35.752128       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:39:35.752282       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:39:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:39:35.933208       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:39:35.933316       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:39:35.933450       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:39:35.934542       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:40:05.933918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1109 14:40:05.934060       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:40:05.934151       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:40:05.934914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1109 14:40:07.534248       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:40:07.534385       1 metrics.go:72] Registering metrics
	I1109 14:40:07.534523       1 controller.go:711] "Syncing nftables rules"
	I1109 14:40:15.933498       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:40:15.933588       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc] <==
	I1109 14:39:33.332962       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:39:33.332991       1 policy_source.go:240] refreshing policies
	I1109 14:39:33.333117       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:39:33.333423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:39:33.341213       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:39:33.356301       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:39:33.358024       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:39:33.399041       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:39:33.399171       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:39:33.399198       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:39:33.430030       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:39:33.488083       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1109 14:39:33.502465       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:39:33.735428       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:39:34.397447       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:39:34.566396       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:39:34.661873       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:39:34.709237       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:39:34.747128       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:39:34.901574       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.58.88"}
	I1109 14:39:34.923625       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.67.145"}
	I1109 14:39:36.605829       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:39:36.843331       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:39:36.902125       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:39:37.032766       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df] <==
	I1109 14:39:36.451850       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:39:36.452555       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:39:36.452599       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:39:36.470212       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:39:36.470809       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:39:36.470921       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:39:36.471005       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:39:36.471710       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:39:36.471752       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:39:36.471969       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:39:36.471999       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:39:36.472569       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:39:36.474442       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:39:36.480166       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:39:36.480767       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:39:36.482651       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:39:36.488150       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:39:36.494228       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:39:36.494344       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:39:36.503080       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:39:36.511251       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:39:36.511341       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:39:36.511360       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 14:39:37.061497       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1109 14:39:37.066294       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [de1e286695edb140cab32ced2c194b32034a19be382818767fa2a5a464fd0087] <==
	I1109 14:39:35.844987       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:39:36.104080       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:39:36.651810       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:39:36.667259       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:39:36.755225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:39:37.130985       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:39:37.131047       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:39:37.145719       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:39:37.146182       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:39:37.146394       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:39:37.147730       1 config.go:200] "Starting service config controller"
	I1109 14:39:37.147790       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:39:37.147844       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:39:37.147896       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:39:37.147932       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:39:37.147957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:39:37.154412       1 config.go:309] "Starting node config controller"
	I1109 14:39:37.155440       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:39:37.155508       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:39:37.248221       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:39:37.248222       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:39:37.248312       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366] <==
	I1109 14:39:33.355199       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:39:36.785076       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:39:36.787406       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:39:36.846137       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1109 14:39:36.846238       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1109 14:39:36.846359       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:36.846403       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:36.846452       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:39:36.846496       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:39:36.849107       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:39:36.849308       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:39:36.947758       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:39:36.947810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:36.960127       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: E1109 14:39:38.285382     768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78f81139-942a-4424-8b87-68a3e0b04fc6-kube-api-access-9jx2b podName:78f81139-942a-4424-8b87-68a3e0b04fc6 nodeName:}" failed. No retries permitted until 2025-11-09 14:39:38.785355286 +0000 UTC m=+14.922278156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9jx2b" (UniqueName: "kubernetes.io/projected/78f81139-942a-4424-8b87-68a3e0b04fc6-kube-api-access-9jx2b") pod "dashboard-metrics-scraper-6ffb444bf9-phsgl" (UID: "78f81139-942a-4424-8b87-68a3e0b04fc6") : failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: E1109 14:39:38.287398     768 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: E1109 14:39:38.287546     768 projected.go:196] Error preparing data for projected volume kube-api-access-jbv96 for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdgpq: failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: E1109 14:39:38.287673     768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4624bdb-c87a-4e38-bfd4-65e1d022ae3a-kube-api-access-jbv96 podName:a4624bdb-c87a-4e38-bfd4-65e1d022ae3a nodeName:}" failed. No retries permitted until 2025-11-09 14:39:38.787653439 +0000 UTC m=+14.924576309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jbv96" (UniqueName: "kubernetes.io/projected/a4624bdb-c87a-4e38-bfd4-65e1d022ae3a-kube-api-access-jbv96") pod "kubernetes-dashboard-855c9754f9-qdgpq" (UID: "a4624bdb-c87a-4e38-bfd4-65e1d022ae3a") : failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: W1109 14:39:38.953916     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/crio-1ca2e62c143788a72e86aa04d9da7f37efffe510b5eca4e03ca0ec4b4e36aa3b WatchSource:0}: Error finding container 1ca2e62c143788a72e86aa04d9da7f37efffe510b5eca4e03ca0ec4b4e36aa3b: Status 404 returned error can't find the container with id 1ca2e62c143788a72e86aa04d9da7f37efffe510b5eca4e03ca0ec4b4e36aa3b
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: W1109 14:39:38.975175     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/crio-9b0c05c409c71a34c4df64cb5c2ff501bc3a2054f1dbdf2985e00a20e0c69e2f WatchSource:0}: Error finding container 9b0c05c409c71a34c4df64cb5c2ff501bc3a2054f1dbdf2985e00a20e0c69e2f: Status 404 returned error can't find the container with id 9b0c05c409c71a34c4df64cb5c2ff501bc3a2054f1dbdf2985e00a20e0c69e2f
	Nov 09 14:39:47 embed-certs-422728 kubelet[768]: I1109 14:39:47.427720     768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdgpq" podStartSLOduration=3.547029057 podStartE2EDuration="11.426862183s" podCreationTimestamp="2025-11-09 14:39:36 +0000 UTC" firstStartedPulling="2025-11-09 14:39:38.957213701 +0000 UTC m=+15.094136571" lastFinishedPulling="2025-11-09 14:39:46.837046818 +0000 UTC m=+22.973969697" observedRunningTime="2025-11-09 14:39:47.426245161 +0000 UTC m=+23.563168055" watchObservedRunningTime="2025-11-09 14:39:47.426862183 +0000 UTC m=+23.563785062"
	Nov 09 14:39:52 embed-certs-422728 kubelet[768]: I1109 14:39:52.402450     768 scope.go:117] "RemoveContainer" containerID="35407c86dba9ac0dd30867af492d13dc39d0b1b307ae8eb0cb672cfdedb7fbc8"
	Nov 09 14:39:53 embed-certs-422728 kubelet[768]: I1109 14:39:53.407150     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:39:53 embed-certs-422728 kubelet[768]: E1109 14:39:53.407315     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:39:53 embed-certs-422728 kubelet[768]: I1109 14:39:53.409648     768 scope.go:117] "RemoveContainer" containerID="35407c86dba9ac0dd30867af492d13dc39d0b1b307ae8eb0cb672cfdedb7fbc8"
	Nov 09 14:39:54 embed-certs-422728 kubelet[768]: I1109 14:39:54.411269     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:39:54 embed-certs-422728 kubelet[768]: E1109 14:39:54.411932     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:39:58 embed-certs-422728 kubelet[768]: I1109 14:39:58.907215     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:39:58 embed-certs-422728 kubelet[768]: E1109 14:39:58.907431     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:40:06 embed-certs-422728 kubelet[768]: I1109 14:40:06.446270     768 scope.go:117] "RemoveContainer" containerID="3b1b52ea2560ce0c00fa2ea0c3ba7b2fb276d6faf0899c104043d7528470cddd"
	Nov 09 14:40:13 embed-certs-422728 kubelet[768]: I1109 14:40:13.170863     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:40:13 embed-certs-422728 kubelet[768]: I1109 14:40:13.469958     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:40:13 embed-certs-422728 kubelet[768]: I1109 14:40:13.470307     768 scope.go:117] "RemoveContainer" containerID="4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98"
	Nov 09 14:40:13 embed-certs-422728 kubelet[768]: E1109 14:40:13.470477     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:40:18 embed-certs-422728 kubelet[768]: I1109 14:40:18.905373     768 scope.go:117] "RemoveContainer" containerID="4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98"
	Nov 09 14:40:18 embed-certs-422728 kubelet[768]: E1109 14:40:18.905593     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:40:21 embed-certs-422728 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:40:21 embed-certs-422728 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:40:21 embed-certs-422728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [fe108ea59a5d4ffb1318ee1b4113ef12ff67b45f3c2041c028c9738cc25481d6] <==
	2025/11/09 14:39:46 Starting overwatch
	2025/11/09 14:39:46 Using namespace: kubernetes-dashboard
	2025/11/09 14:39:46 Using in-cluster config to connect to apiserver
	2025/11/09 14:39:46 Using secret token for csrf signing
	2025/11/09 14:39:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:39:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:39:46 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:39:46 Generating JWE encryption key
	2025/11/09 14:39:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:39:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:39:47 Initializing JWE encryption key from synchronized object
	2025/11/09 14:39:47 Creating in-cluster Sidecar client
	2025/11/09 14:39:47 Serving insecurely on HTTP port: 9090
	2025/11/09 14:39:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:40:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [3b1b52ea2560ce0c00fa2ea0c3ba7b2fb276d6faf0899c104043d7528470cddd] <==
	I1109 14:39:35.874660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:40:05.877093       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7e3f27e138c59aa0cb710724e534caab4379d6a10868fbbe90e3e8f884adb4a7] <==
	I1109 14:40:06.499444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:40:06.513039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:40:06.513149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:40:06.519471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:09.975231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:14.235272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:17.834216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:20.887785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:23.910358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:23.915516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:40:23.915670       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:40:23.915832       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-422728_d646feab-232e-4d11-bd52-56eb99080d9e!
	I1109 14:40:23.916689       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75bbad6d-f285-4ed2-83c3-c9896fff11ae", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-422728_d646feab-232e-4d11-bd52-56eb99080d9e became leader
	W1109 14:40:23.929258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:23.942561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:40:24.017684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-422728_d646feab-232e-4d11-bd52-56eb99080d9e!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-422728 -n embed-certs-422728
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-422728 -n embed-certs-422728: exit status 2 (582.35052ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-422728 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-422728
helpers_test.go:243: (dbg) docker inspect embed-certs-422728:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12",
	        "Created": "2025-11-09T14:37:33.73724942Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 196924,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:39:15.931898847Z",
	            "FinishedAt": "2025-11-09T14:39:15.14453408Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/hostname",
	        "HostsPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/hosts",
	        "LogPath": "/var/lib/docker/containers/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12-json.log",
	        "Name": "/embed-certs-422728",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-422728:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-422728",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12",
	                "LowerDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/merged",
	                "UpperDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/diff",
	                "WorkDir": "/var/lib/docker/overlay2/82eab5e46ee1cb98bb6b6b37a53cb10857e6af0e929746a88406ed41f77ffa85/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-422728",
	                "Source": "/var/lib/docker/volumes/embed-certs-422728/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-422728",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-422728",
	                "name.minikube.sigs.k8s.io": "embed-certs-422728",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29f3f7d828d652d9c9ddb4d846e708a2c5ab41bb94fcdef9a566961b5adc9615",
	            "SandboxKey": "/var/run/docker/netns/29f3f7d828d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-422728": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:34:1e:83:22:ac",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "78ce79b8fdce892f49cf723023717b9a2880c30a5665eaa6c42c151329eb9e85",
	                    "EndpointID": "dea4fa3ed9f39346f83688a2e7b316193eff746ed3ee23d49fdbe1ab56df3077",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-422728",
	                        "45825e68cb86"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-422728 -n embed-certs-422728
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-422728 -n embed-certs-422728: exit status 2 (457.537262ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-422728 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-422728 logs -n 25: (1.555784447s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-349599 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │                     │
	│ stop    │ -p old-k8s-version-349599 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:35 UTC │ 09 Nov 25 14:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:36 UTC │ 09 Nov 25 14:36 UTC │
	│ start   │ -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ image   │ old-k8s-version-349599 image list --format=json                                                                                                                                                                                               │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ pause   │ -p old-k8s-version-349599 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │                     │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ delete  │ -p cert-expiration-179822                                                                                                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ stop    │ -p embed-certs-422728 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ image   │ default-k8s-diff-port-103048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p default-k8s-diff-port-103048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:39:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:39:15.653812  196795 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:39:15.654001  196795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:39:15.654038  196795 out.go:374] Setting ErrFile to fd 2...
	I1109 14:39:15.654052  196795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:39:15.654356  196795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:39:15.654781  196795 out.go:368] Setting JSON to false
	I1109 14:39:15.655688  196795 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4906,"bootTime":1762694250,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:39:15.655757  196795 start.go:143] virtualization:  
	I1109 14:39:15.660654  196795 out.go:179] * [embed-certs-422728] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:39:15.663936  196795 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:39:15.663991  196795 notify.go:221] Checking for updates...
	I1109 14:39:15.670031  196795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:39:15.672921  196795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:15.675823  196795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:39:15.678877  196795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:39:15.681871  196795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:39:15.685303  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:15.685991  196795 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:39:15.716089  196795 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:39:15.716233  196795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:39:15.783072  196795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-09 14:39:15.77300627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:39:15.783205  196795 docker.go:319] overlay module found
	I1109 14:39:15.786478  196795 out.go:179] * Using the docker driver based on existing profile
	I1109 14:39:15.789381  196795 start.go:309] selected driver: docker
	I1109 14:39:15.789420  196795 start.go:930] validating driver "docker" against &{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:15.789515  196795 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:39:15.790229  196795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:39:15.845783  196795 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-09 14:39:15.836143549 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:39:15.846132  196795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:15.846168  196795 cni.go:84] Creating CNI manager for ""
	I1109 14:39:15.846227  196795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:15.846266  196795 start.go:353] cluster config:
	{Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:15.849466  196795 out.go:179] * Starting "embed-certs-422728" primary control-plane node in "embed-certs-422728" cluster
	I1109 14:39:15.852353  196795 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:39:15.855395  196795 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:39:15.858354  196795 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:15.858406  196795 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:39:15.858425  196795 cache.go:65] Caching tarball of preloaded images
	I1109 14:39:15.858430  196795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:39:15.858538  196795 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:39:15.858550  196795 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:39:15.858709  196795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:39:15.879215  196795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:39:15.879245  196795 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:39:15.879257  196795 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:39:15.879367  196795 start.go:360] acquireMachinesLock for embed-certs-422728: {Name:mkaf26c3066ebca49339c9527aed846108c5e799 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:39:15.879441  196795 start.go:364] duration metric: took 46.114µs to acquireMachinesLock for "embed-certs-422728"
	I1109 14:39:15.879465  196795 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:39:15.879476  196795 fix.go:54] fixHost starting: 
	I1109 14:39:15.879824  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:15.897379  196795 fix.go:112] recreateIfNeeded on embed-certs-422728: state=Stopped err=<nil>
	W1109 14:39:15.897409  196795 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:39:13.761899  196129 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-103048" ...
	I1109 14:39:13.762001  196129 cli_runner.go:164] Run: docker start default-k8s-diff-port-103048
	I1109 14:39:14.005697  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:14.031930  196129 kic.go:430] container "default-k8s-diff-port-103048" state is running.
	I1109 14:39:14.032334  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:14.054133  196129 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/config.json ...
	I1109 14:39:14.054518  196129 machine.go:94] provisionDockerMachine start ...
	I1109 14:39:14.054646  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:14.076480  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:14.076798  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:14.076807  196129 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:39:14.077436  196129 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58722->127.0.0.1:33065: read: connection reset by peer
	I1109 14:39:17.231473  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:39:17.231499  196129 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-103048"
	I1109 14:39:17.231624  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.249722  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.250048  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.250064  196129 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-103048 && echo "default-k8s-diff-port-103048" | sudo tee /etc/hostname
	I1109 14:39:17.410092  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-103048
	
	I1109 14:39:17.410211  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.428985  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.429306  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.429330  196129 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-103048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-103048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-103048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:39:17.580249  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:39:17.580275  196129 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:39:17.580301  196129 ubuntu.go:190] setting up certificates
	I1109 14:39:17.580311  196129 provision.go:84] configureAuth start
	I1109 14:39:17.580368  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:17.598399  196129 provision.go:143] copyHostCerts
	I1109 14:39:17.598470  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:39:17.598489  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:39:17.598565  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:39:17.598662  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:39:17.598674  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:39:17.598703  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:39:17.598755  196129 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:39:17.598765  196129 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:39:17.598788  196129 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:39:17.598837  196129 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-103048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-103048 localhost minikube]
	I1109 14:39:17.688954  196129 provision.go:177] copyRemoteCerts
	I1109 14:39:17.689019  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:39:17.689060  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.708206  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:17.819695  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:39:17.837093  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 14:39:17.854586  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:39:17.871745  196129 provision.go:87] duration metric: took 291.419804ms to configureAuth
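configureAuth above regenerates the docker-machine style server certificate for this profile, with the organisation and SAN list shown in the log (127.0.0.1, 192.168.85.2, the profile name, localhost, minikube), and copies it to /etc/docker on the node. minikube does this internally with Go's crypto libraries; purely as an illustration of what that certificate contains, an equivalent OpenSSL sketch (hypothetical local file names, not minikube's own mechanism) would be:
	# key + CSR for the server certificate, organisation as in the log
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.default-k8s-diff-port-103048" -out server.csr
	# sign with the CA and attach the SAN list from the log line above
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:default-k8s-diff-port-103048,DNS:localhost,DNS:minikube\n') \
	  -out server.pem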
	I1109 14:39:17.871814  196129 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:39:17.872050  196129 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:17.872194  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:17.889492  196129 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:17.889805  196129 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33065 <nil> <nil>}
	I1109 14:39:17.889825  196129 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:39:18.202831  196129 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:39:18.202918  196129 machine.go:97] duration metric: took 4.148387076s to provisionDockerMachine
	I1109 14:39:18.202944  196129 start.go:293] postStartSetup for "default-k8s-diff-port-103048" (driver="docker")
	I1109 14:39:18.202988  196129 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:39:18.203082  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:39:18.203170  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.224891  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.335626  196129 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:39:18.338990  196129 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:39:18.339018  196129 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:39:18.339029  196129 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:39:18.339123  196129 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:39:18.339197  196129 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:39:18.339307  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:39:18.347413  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:18.365395  196129 start.go:296] duration metric: took 162.403249ms for postStartSetup
	I1109 14:39:18.365474  196129 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:39:18.365513  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.383461  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.485492  196129 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:39:18.490710  196129 fix.go:56] duration metric: took 4.748854309s for fixHost
	I1109 14:39:18.490737  196129 start.go:83] releasing machines lock for "default-k8s-diff-port-103048", held for 4.748905699s
	I1109 14:39:18.490807  196129 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-103048
	I1109 14:39:18.508468  196129 ssh_runner.go:195] Run: cat /version.json
	I1109 14:39:18.508516  196129 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:39:18.508525  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.508574  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:18.533762  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.534380  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:18.733641  196129 ssh_runner.go:195] Run: systemctl --version
	I1109 14:39:18.740509  196129 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:39:18.777813  196129 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:39:18.782333  196129 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:39:18.782411  196129 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:39:18.790609  196129 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:39:18.790636  196129 start.go:496] detecting cgroup driver to use...
	I1109 14:39:18.790700  196129 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:39:18.790764  196129 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:39:18.806443  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:39:18.820129  196129 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:39:18.820246  196129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:39:18.836297  196129 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:39:18.849893  196129 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:39:18.961965  196129 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:39:19.074901  196129 docker.go:234] disabling docker service ...
	I1109 14:39:19.075010  196129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:39:19.090357  196129 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:39:19.103755  196129 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:39:19.214649  196129 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:39:19.369065  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:39:19.382216  196129 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:39:19.396769  196129 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:39:19.396864  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.415946  196129 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:39:19.416022  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.427276  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.437233  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.447125  196129 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:39:19.455793  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.468606  196129 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.482521  196129 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:19.491385  196129 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:39:19.499271  196129 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:39:19.507157  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:19.643285  196129 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:39:19.789716  196129 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:39:19.789782  196129 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:39:19.802113  196129 start.go:564] Will wait 60s for crictl version
	I1109 14:39:19.802187  196129 ssh_runner.go:195] Run: which crictl
	I1109 14:39:19.806163  196129 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:39:19.850016  196129 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:39:19.850100  196129 ssh_runner.go:195] Run: crio --version
	I1109 14:39:19.886662  196129 ssh_runner.go:195] Run: crio --version
	I1109 14:39:19.922121  196129 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
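The cri-o reconfiguration above (crictl socket, pause image, cgroup driver, conmon cgroup, IP forwarding) is spread over many individual ssh_runner calls; collapsed into one script it amounts to roughly the following. This is a sketch reconstructed from the log, not minikube's own code, and the drop-in path and pause-image tag are simply the values this run used.
	# point crictl at the cri-o socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pause image, cgroup driver and conmon cgroup in the cri-o drop-in
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# kernel side: allow forwarding, then restart the runtime and confirm it answers
	sudo sysctl -w net.ipv4.ip_forward=1
	sudo systemctl daemon-reload && sudo systemctl restart crio
	sudo crictl version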
	I1109 14:39:15.900502  196795 out.go:252] * Restarting existing docker container for "embed-certs-422728" ...
	I1109 14:39:15.900586  196795 cli_runner.go:164] Run: docker start embed-certs-422728
	I1109 14:39:16.155027  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:16.179053  196795 kic.go:430] container "embed-certs-422728" state is running.
	I1109 14:39:16.179431  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:16.202650  196795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/config.json ...
	I1109 14:39:16.202886  196795 machine.go:94] provisionDockerMachine start ...
	I1109 14:39:16.202954  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:16.226627  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:16.227039  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:16.227058  196795 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:39:16.227903  196795 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:39:19.403380  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:39:19.403421  196795 ubuntu.go:182] provisioning hostname "embed-certs-422728"
	I1109 14:39:19.403526  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:19.425865  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:19.426162  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:19.426172  196795 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-422728 && echo "embed-certs-422728" | sudo tee /etc/hostname
	I1109 14:39:19.604836  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-422728
	
	I1109 14:39:19.604972  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:19.627515  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:19.627823  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:19.627846  196795 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-422728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-422728/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-422728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:39:19.784610  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:39:19.784640  196795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:39:19.784720  196795 ubuntu.go:190] setting up certificates
	I1109 14:39:19.784751  196795 provision.go:84] configureAuth start
	I1109 14:39:19.784837  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:19.811636  196795 provision.go:143] copyHostCerts
	I1109 14:39:19.811695  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:39:19.811709  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:39:19.811785  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:39:19.811895  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:39:19.811901  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:39:19.811929  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:39:19.811991  196795 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:39:19.811995  196795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:39:19.812021  196795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:39:19.812067  196795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.embed-certs-422728 san=[127.0.0.1 192.168.76.2 embed-certs-422728 localhost minikube]
	I1109 14:39:20.018694  196795 provision.go:177] copyRemoteCerts
	I1109 14:39:20.018776  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:39:20.018829  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.041481  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.156424  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:39:20.179967  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1109 14:39:20.205588  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:39:20.224981  196795 provision.go:87] duration metric: took 440.207382ms to configureAuth
	I1109 14:39:20.225018  196795 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:39:20.225226  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:20.225355  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.251487  196795 main.go:143] libmachine: Using SSH client type: native
	I1109 14:39:20.251808  196795 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33070 <nil> <nil>}
	I1109 14:39:20.251826  196795 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:39:19.924910  196129 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-103048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:39:19.947696  196129 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:39:19.951833  196129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:19.966489  196129 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:39:19.966612  196129 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:19.966665  196129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:20.014624  196129 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:20.014649  196129 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:39:20.014710  196129 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:20.061070  196129 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:20.061092  196129 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:39:20.061100  196129 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1109 14:39:20.061201  196129 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
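The kubelet unit shown above is what minikube later copies onto the node (the 378-byte 10-kubeadm.conf drop-in and the 352-byte kubelet.service scp'd a few lines further down) before running daemon-reload and starting kubelet. A minimal hand-written equivalent of activating such an ExecStart override is sketched below; the flag list is copied from the log and only illustrates the drop-in mechanism.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-103048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	EOF
	# as in the log: reload units, then start kubelet
	sudo systemctl daemon-reload
	sudo systemctl start kubelet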
	I1109 14:39:20.061279  196129 ssh_runner.go:195] Run: crio config
	I1109 14:39:20.135847  196129 cni.go:84] Creating CNI manager for ""
	I1109 14:39:20.135907  196129 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:20.135931  196129 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:39:20.135955  196129 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-103048 NodeName:default-k8s-diff-port-103048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:39:20.136111  196129 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-103048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
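This multi-document config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (the 2225-byte scp a few lines below) and is the file kubeadm consumes. For orientation only, a stand-alone bootstrap with the same config would be roughly the command below; on a restart like this one, minikube does not re-run a full init but reuses the existing control-plane state.
	# illustrative only: feed the generated config to kubeadm on a fresh node
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new
	# the admin kubeconfig written by kubeadm can then be used for a sanity check
	sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes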
	I1109 14:39:20.136224  196129 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:39:20.144992  196129 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:39:20.145080  196129 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:39:20.154676  196129 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1109 14:39:20.171245  196129 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:39:20.185580  196129 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1109 14:39:20.201765  196129 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:39:20.206582  196129 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:20.218611  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:20.366358  196129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:20.384455  196129 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048 for IP: 192.168.85.2
	I1109 14:39:20.384475  196129 certs.go:195] generating shared ca certs ...
	I1109 14:39:20.384493  196129 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:20.384623  196129 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:39:20.384665  196129 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:39:20.384672  196129 certs.go:257] generating profile certs ...
	I1109 14:39:20.384786  196129 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.key
	I1109 14:39:20.384849  196129 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key.87358e1c
	I1109 14:39:20.384887  196129 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key
	I1109 14:39:20.384987  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:39:20.385015  196129 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:39:20.385023  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:39:20.385046  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:39:20.385067  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:39:20.385087  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:39:20.385128  196129 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:20.385719  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:39:20.406961  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:39:20.439170  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:39:20.464461  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:39:20.498671  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1109 14:39:20.538022  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:39:20.576148  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:39:20.647061  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:39:20.713722  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:39:20.735137  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:39:20.759543  196129 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:39:20.778573  196129 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:39:20.791915  196129 ssh_runner.go:195] Run: openssl version
	I1109 14:39:20.804883  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:39:20.821236  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.826965  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.827033  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:39:20.880407  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:39:20.888410  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:39:20.897832  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.901509  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.901575  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:39:20.942961  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:39:20.950695  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:39:20.958594  196129 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:20.963390  196129 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:20.963454  196129 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:21.024236  196129 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
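The three install steps above follow OpenSSL's hashed-directory convention: "openssl x509 -hash -noout -in <cert>" prints the subject-name hash, and the certificate is then made reachable in /etc/ssl/certs through a symlink named <hash>.0 (here 51391683.0, 3ec20f2e.0 and b5213941.0). A one-line sketch of the same relationship for the minikube CA installed above, assuming the paths shown in the log:

  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0"

This produces the same layout that c_rehash or update-ca-certificates would; minikube simply creates the links explicitly, one certificate at a time.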
	I1109 14:39:21.038127  196129 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:39:21.045164  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:39:21.092111  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:39:21.157987  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:39:21.210593  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:39:21.275270  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:39:21.342680  196129 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
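Each of the six checks above relies on openssl's -checkend flag: the command exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so a non-zero exit flags that control-plane certificate as expiring within a day before the cluster is restarted. An equivalent standalone check, using one of the paths from the log:

  openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
    && echo "still valid in 24h" || echo "expires within 24h"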
	I1109 14:39:21.420934  196129 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-103048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-103048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:21.421028  196129 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:39:21.421090  196129 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:39:21.519887  196129 cri.go:89] found id: "7e93099edfb89c3c6dfb2117e807c954f165aeeee5346a2bbbcc8cebc0b57e6d"
	I1109 14:39:21.519932  196129 cri.go:89] found id: "6ac4e39c7a9ff676af77a2c3d9451f34c38ceb98460bce7a887c77b5521f39bb"
	I1109 14:39:21.519938  196129 cri.go:89] found id: "7d4eac93ccb3effa065a9c4f9d98f14e0e8db5c13414ee7cecb3ffcedd98326f"
	I1109 14:39:21.519945  196129 cri.go:89] found id: ""
	I1109 14:39:21.519999  196129 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:39:21.543667  196129 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:21Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:39:21.543751  196129 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:39:21.572102  196129 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:39:21.572126  196129 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:39:21.572191  196129 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:39:21.608694  196129 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:39:21.609164  196129 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-103048" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:21.609280  196129 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-103048" cluster setting kubeconfig missing "default-k8s-diff-port-103048" context setting]
	I1109 14:39:21.609631  196129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.611238  196129 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:39:21.624438  196129 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1109 14:39:21.624472  196129 kubeadm.go:602] duration metric: took 52.339359ms to restartPrimaryControlPlane
	I1109 14:39:21.624481  196129 kubeadm.go:403] duration metric: took 203.557147ms to StartCluster
	I1109 14:39:21.624504  196129 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.624565  196129 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:21.625263  196129 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:21.625488  196129 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:39:21.625839  196129 config.go:182] Loaded profile config "default-k8s-diff-port-103048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:21.625884  196129 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:39:21.626037  196129 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.626062  196129 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.626071  196129 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:39:21.626090  196129 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.626131  196129 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.626162  196129 addons.go:248] addon dashboard should already be in state true
	I1109 14:39:21.626201  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.626098  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.626753  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.626812  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.626105  196129 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-103048"
	I1109 14:39:21.627319  196129 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-103048"
	I1109 14:39:21.627583  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.630802  196129 out.go:179] * Verifying Kubernetes components...
	I1109 14:39:21.639626  196129 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:21.684051  196129 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:39:21.684138  196129 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:39:21.689975  196129 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:21.690000  196129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:39:21.690064  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.691595  196129 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-103048"
	W1109 14:39:21.691618  196129 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:39:21.691648  196129 host.go:66] Checking if "default-k8s-diff-port-103048" exists ...
	I1109 14:39:21.692125  196129 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-103048 --format={{.State.Status}}
	I1109 14:39:21.693212  196129 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:39:20.654722  196795 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:39:20.654747  196795 machine.go:97] duration metric: took 4.451852424s to provisionDockerMachine
	I1109 14:39:20.654773  196795 start.go:293] postStartSetup for "embed-certs-422728" (driver="docker")
	I1109 14:39:20.654784  196795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:39:20.654845  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:39:20.654912  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.679374  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.801375  196795 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:39:20.805427  196795 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:39:20.805453  196795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:39:20.805462  196795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:39:20.805518  196795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:39:20.805610  196795 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:39:20.805711  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:39:20.816935  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:20.836739  196795 start.go:296] duration metric: took 181.951304ms for postStartSetup
	I1109 14:39:20.836817  196795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:39:20.836854  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.857314  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:20.961850  196795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:39:20.969380  196795 fix.go:56] duration metric: took 5.089888739s for fixHost
	I1109 14:39:20.969406  196795 start.go:83] releasing machines lock for "embed-certs-422728", held for 5.089951877s
	I1109 14:39:20.969490  196795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-422728
	I1109 14:39:20.989316  196795 ssh_runner.go:195] Run: cat /version.json
	I1109 14:39:20.989379  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:20.989634  196795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:39:20.989678  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:21.019194  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:21.033559  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:21.156397  196795 ssh_runner.go:195] Run: systemctl --version
	I1109 14:39:21.283906  196795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:39:21.351300  196795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:39:21.357015  196795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:39:21.357091  196795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:39:21.368625  196795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:39:21.368703  196795 start.go:496] detecting cgroup driver to use...
	I1109 14:39:21.368745  196795 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:39:21.368818  196795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:39:21.387612  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:39:21.408379  196795 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:39:21.408518  196795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:39:21.436708  196795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:39:21.466974  196795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:39:21.728628  196795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:39:21.974405  196795 docker.go:234] disabling docker service ...
	I1109 14:39:21.974481  196795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:39:22.005296  196795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:39:22.034069  196795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:39:22.248316  196795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:39:22.448530  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:39:22.471795  196795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:39:22.504195  196795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:39:22.504253  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.522453  196795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:39:22.522527  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.540125  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.553926  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.576162  196795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:39:22.585909  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.594587  196795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.609067  196795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:39:22.617377  196795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:39:22.630975  196795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:39:22.638323  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:22.838273  196795 ssh_runner.go:195] Run: sudo systemctl restart crio
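The block of sed edits above (14:39:22.504 through 14:39:22.609) rewrites CRI-O's drop-in config at /etc/crio/crio.conf.d/02-crio.conf before the restart: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs to match the cgroup driver detected on the host, conmon_cgroup is set to "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls. One way to inspect the result on the node (illustrative; the rest of the file's contents are not shown in this log):

  sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf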
	I1109 14:39:23.036210  196795 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:39:23.036366  196795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:39:23.044751  196795 start.go:564] Will wait 60s for crictl version
	I1109 14:39:23.044867  196795 ssh_runner.go:195] Run: which crictl
	I1109 14:39:23.051712  196795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:39:23.102897  196795 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:39:23.103045  196795 ssh_runner.go:195] Run: crio --version
	I1109 14:39:23.156948  196795 ssh_runner.go:195] Run: crio --version
	I1109 14:39:23.225201  196795 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:39:21.696124  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:39:21.696149  196129 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:39:21.696218  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.741371  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:21.750194  196129 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:21.750218  196129 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:39:21.750296  196129 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-103048
	I1109 14:39:21.766459  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:21.787620  196129 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33065 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/default-k8s-diff-port-103048/id_rsa Username:docker}
	I1109 14:39:22.148935  196129 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:22.161023  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:22.228626  196129 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:39:22.237309  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:22.258565  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:39:22.258641  196129 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:39:22.389427  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:39:22.389532  196129 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:39:22.526134  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:39:22.526206  196129 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:39:22.627561  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:39:22.627621  196129 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:39:22.674772  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:39:22.674843  196129 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:39:22.695155  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:39:22.695229  196129 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:39:22.738582  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:39:22.738656  196129 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:39:22.763078  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:39:22.763151  196129 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:39:22.805266  196129 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:22.805341  196129 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:39:22.831261  196129 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:23.228075  196795 cli_runner.go:164] Run: docker network inspect embed-certs-422728 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:39:23.257879  196795 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:39:23.262130  196795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:23.280983  196795 kubeadm.go:884] updating cluster {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:39:23.281094  196795 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:39:23.281162  196795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:23.361099  196795 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:23.361119  196795 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:39:23.361171  196795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:39:23.413183  196795 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:39:23.413202  196795 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:39:23.413210  196795 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:39:23.413308  196795 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-422728 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:39:23.413385  196795 ssh_runner.go:195] Run: crio config
	I1109 14:39:23.563585  196795 cni.go:84] Creating CNI manager for ""
	I1109 14:39:23.563654  196795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:39:23.563691  196795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:39:23.563764  196795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-422728 NodeName:embed-certs-422728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:39:23.563947  196795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-422728"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:39:23.564045  196795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:39:23.572916  196795 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:39:23.573035  196795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:39:23.581385  196795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1109 14:39:23.595976  196795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:39:23.609988  196795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1109 14:39:23.624103  196795 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:39:23.627903  196795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:39:23.637960  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:23.834596  196795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:23.851619  196795 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728 for IP: 192.168.76.2
	I1109 14:39:23.851693  196795 certs.go:195] generating shared ca certs ...
	I1109 14:39:23.851722  196795 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:23.851903  196795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:39:23.851988  196795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:39:23.852012  196795 certs.go:257] generating profile certs ...
	I1109 14:39:23.852144  196795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/client.key
	I1109 14:39:23.852244  196795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key.b1b6b07a
	I1109 14:39:23.852384  196795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key
	I1109 14:39:23.852540  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:39:23.852606  196795 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:39:23.852637  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:39:23.852689  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:39:23.852735  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:39:23.852795  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:39:23.852868  196795 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:39:23.853641  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:39:23.941040  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:39:24.012418  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:39:24.042429  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:39:24.071468  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1109 14:39:24.116434  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:39:24.161053  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:39:24.224105  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/embed-certs-422728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:39:24.267707  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:39:24.314203  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:39:24.345761  196795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:39:24.382658  196795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:39:24.401317  196795 ssh_runner.go:195] Run: openssl version
	I1109 14:39:24.412746  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:39:24.425193  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.429586  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.429714  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:39:24.492081  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:39:24.502155  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:39:24.510808  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.515143  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.515237  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:39:24.570674  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:39:24.579490  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:39:24.606288  196795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.614978  196795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.615077  196795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:39:24.702675  196795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:39:24.724731  196795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:39:24.736968  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:39:24.828754  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:39:24.919293  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:39:25.033233  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:39:25.133106  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:39:25.239384  196795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:39:25.320678  196795 kubeadm.go:401] StartCluster: {Name:embed-certs-422728 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-422728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:39:25.320782  196795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:39:25.320876  196795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:39:25.395488  196795 cri.go:89] found id: "a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366"
	I1109 14:39:25.395518  196795 cri.go:89] found id: "2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc"
	I1109 14:39:25.395523  196795 cri.go:89] found id: "7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df"
	I1109 14:39:25.395529  196795 cri.go:89] found id: "7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16"
	I1109 14:39:25.395540  196795 cri.go:89] found id: ""
	I1109 14:39:25.395626  196795 ssh_runner.go:195] Run: sudo runc list -f json
	W1109 14:39:25.421453  196795 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:39:25Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:39:25.421568  196795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:39:25.434118  196795 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:39:25.434139  196795 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:39:25.434224  196795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:39:25.455848  196795 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:39:25.456462  196795 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-422728" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:25.456756  196795 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-422728" cluster setting kubeconfig missing "embed-certs-422728" context setting]
	I1109 14:39:25.457252  196795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.458892  196795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:39:25.472254  196795 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:39:25.472299  196795 kubeadm.go:602] duration metric: took 38.151656ms to restartPrimaryControlPlane
	I1109 14:39:25.472333  196795 kubeadm.go:403] duration metric: took 151.665347ms to StartCluster
	I1109 14:39:25.472350  196795 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.472439  196795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:39:25.474717  196795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:39:25.475122  196795 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:39:25.475457  196795 config.go:182] Loaded profile config "embed-certs-422728": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:39:25.475514  196795 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:39:25.475607  196795 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-422728"
	I1109 14:39:25.475629  196795 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-422728"
	W1109 14:39:25.475642  196795 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:39:25.475657  196795 addons.go:70] Setting dashboard=true in profile "embed-certs-422728"
	I1109 14:39:25.475671  196795 addons.go:239] Setting addon dashboard=true in "embed-certs-422728"
	W1109 14:39:25.475677  196795 addons.go:248] addon dashboard should already be in state true
	I1109 14:39:25.475700  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.476345  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.476519  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.476941  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.477501  196795 addons.go:70] Setting default-storageclass=true in profile "embed-certs-422728"
	I1109 14:39:25.477528  196795 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-422728"
	I1109 14:39:25.477804  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.483396  196795 out.go:179] * Verifying Kubernetes components...
	I1109 14:39:25.487964  196795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:39:25.515113  196795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:39:25.518086  196795 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:39:25.521009  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:39:25.521039  196795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:39:25.521115  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.540397  196795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:39:25.545565  196795 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:25.545587  196795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:39:25.545649  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.553421  196795 addons.go:239] Setting addon default-storageclass=true in "embed-certs-422728"
	W1109 14:39:25.553458  196795 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:39:25.553498  196795 host.go:66] Checking if "embed-certs-422728" exists ...
	I1109 14:39:25.553946  196795 cli_runner.go:164] Run: docker container inspect embed-certs-422728 --format={{.State.Status}}
	I1109 14:39:25.587976  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.610580  196795 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:25.610609  196795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:39:25.610676  196795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-422728
	I1109 14:39:25.611768  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.643462  196795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33070 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/embed-certs-422728/id_rsa Username:docker}
	I1109 14:39:25.951056  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:39:26.036278  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:39:26.036356  196795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:39:26.113974  196795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:39:26.133211  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:39:26.150339  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:39:26.150412  196795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:39:26.224674  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:39:26.224743  196795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:39:26.342164  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:39:26.342238  196795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:39:26.457225  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:39:26.457281  196795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:39:26.524480  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:39:26.524551  196795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:39:26.545432  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:39:26.545495  196795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:39:26.569785  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:39:26.569856  196795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:39:26.593384  196795 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:26.593446  196795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:39:26.632772  196795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:39:29.705357  196129 node_ready.go:49] node "default-k8s-diff-port-103048" is "Ready"
	I1109 14:39:29.705456  196129 node_ready.go:38] duration metric: took 7.476741625s for node "default-k8s-diff-port-103048" to be "Ready" ...
	I1109 14:39:29.705484  196129 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:39:29.705569  196129 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:39:32.996787  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.835671987s)
	I1109 14:39:32.996899  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.759518546s)
	I1109 14:39:32.997220  196129 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.165879191s)
	I1109 14:39:32.997471  196129 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.291860623s)
	I1109 14:39:32.997521  196129 api_server.go:72] duration metric: took 11.371993953s to wait for apiserver process to appear ...
	I1109 14:39:32.997542  196129 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:39:32.997571  196129 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1109 14:39:33.000725  196129 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-103048 addons enable metrics-server
	
	I1109 14:39:33.020969  196129 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1109 14:39:33.023683  196129 api_server.go:141] control plane version: v1.34.1
	I1109 14:39:33.023714  196129 api_server.go:131] duration metric: took 26.153345ms to wait for apiserver health ...
	I1109 14:39:33.023725  196129 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:39:33.032087  196129 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:39:33.033482  196129 system_pods.go:59] 8 kube-system pods found
	I1109 14:39:33.033582  196129 system_pods.go:61] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:33.033606  196129 system_pods.go:61] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:33.033629  196129 system_pods.go:61] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:33.033667  196129 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:39:33.033692  196129 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:33.033712  196129 system_pods.go:61] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:39:33.033743  196129 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:39:33.033770  196129 system_pods.go:61] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:39:33.033790  196129 system_pods.go:74] duration metric: took 10.030263ms to wait for pod list to return data ...
	I1109 14:39:33.033824  196129 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:39:33.034992  196129 addons.go:515] duration metric: took 11.409095214s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:39:33.040658  196129 default_sa.go:45] found service account: "default"
	I1109 14:39:33.040686  196129 default_sa.go:55] duration metric: took 6.835118ms for default service account to be created ...
	I1109 14:39:33.040697  196129 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:39:33.044695  196129 system_pods.go:86] 8 kube-system pods found
	I1109 14:39:33.044733  196129 system_pods.go:89] "coredns-66bc5c9577-rbvc2" [a2c09df3-22f7-4863-81b7-71d92e6457c7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:33.044743  196129 system_pods.go:89] "etcd-default-k8s-diff-port-103048" [7ca61e70-9bc1-4a3d-9175-40fabccc3dfe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:33.044786  196129 system_pods.go:89] "kindnet-tz2x5" [41a63a24-6d2b-453d-a118-2a5b03e08396] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:33.044801  196129 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-103048" [80dc9a9a-1a79-447a-b1a4-38d7611de8c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:39:33.044809  196129 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-103048" [f1da8430-3b4b-4fab-a1cb-d8984aabae63] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:33.044819  196129 system_pods.go:89] "kube-proxy-c57m2" [d93835ed-7e40-4171-a3ee-f815a8d20380] Running
	I1109 14:39:33.044824  196129 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-103048" [8e18ee25-6d8f-4598-89c1-67b7fc04936b] Running
	I1109 14:39:33.044829  196129 system_pods.go:89] "storage-provisioner" [251b0857-5681-47c0-b891-8a4c109aaa4b] Running
	I1109 14:39:33.044854  196129 system_pods.go:126] duration metric: took 4.149902ms to wait for k8s-apps to be running ...
	I1109 14:39:33.044870  196129 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:39:33.044951  196129 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:39:33.077530  196129 system_svc.go:56] duration metric: took 32.649827ms WaitForService to wait for kubelet
	I1109 14:39:33.077564  196129 kubeadm.go:587] duration metric: took 11.452030043s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:33.077606  196129 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:39:33.086426  196129 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:39:33.086461  196129 node_conditions.go:123] node cpu capacity is 2
	I1109 14:39:33.086473  196129 node_conditions.go:105] duration metric: took 8.861178ms to run NodePressure ...
	I1109 14:39:33.086516  196129 start.go:242] waiting for startup goroutines ...
	I1109 14:39:33.086533  196129 start.go:247] waiting for cluster config update ...
	I1109 14:39:33.086544  196129 start.go:256] writing updated cluster config ...
	I1109 14:39:33.086866  196129 ssh_runner.go:195] Run: rm -f paused
	I1109 14:39:33.096386  196129 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:39:33.164789  196129 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:39:35.201675  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.250533062s)
	I1109 14:39:35.201721  196795 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.087664371s)
	I1109 14:39:35.201760  196795 node_ready.go:35] waiting up to 6m0s for node "embed-certs-422728" to be "Ready" ...
	I1109 14:39:35.202074  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.068793828s)
	I1109 14:39:35.202315  196795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.569467426s)
	I1109 14:39:35.205755  196795 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-422728 addons enable metrics-server
	
	I1109 14:39:35.282264  196795 node_ready.go:49] node "embed-certs-422728" is "Ready"
	I1109 14:39:35.282343  196795 node_ready.go:38] duration metric: took 80.561028ms for node "embed-certs-422728" to be "Ready" ...
	I1109 14:39:35.282371  196795 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:39:35.282455  196795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:39:35.306663  196795 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1109 14:39:35.309737  196795 addons.go:515] duration metric: took 9.834202528s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1109 14:39:35.336441  196795 api_server.go:72] duration metric: took 9.861275529s to wait for apiserver process to appear ...
	I1109 14:39:35.336467  196795 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:39:35.336489  196795 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:39:35.381991  196795 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:39:35.384051  196795 api_server.go:141] control plane version: v1.34.1
	I1109 14:39:35.384080  196795 api_server.go:131] duration metric: took 47.606213ms to wait for apiserver health ...
	I1109 14:39:35.384090  196795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:39:35.401482  196795 system_pods.go:59] 8 kube-system pods found
	I1109 14:39:35.401522  196795 system_pods.go:61] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:35.401532  196795 system_pods.go:61] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:35.401542  196795 system_pods.go:61] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:35.401547  196795 system_pods.go:61] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:39:35.401556  196795 system_pods.go:61] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:35.401564  196795 system_pods.go:61] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:39:35.401581  196795 system_pods.go:61] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:39:35.401590  196795 system_pods.go:61] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:39:35.401601  196795 system_pods.go:74] duration metric: took 17.504641ms to wait for pod list to return data ...
	I1109 14:39:35.401610  196795 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:39:35.428228  196795 default_sa.go:45] found service account: "default"
	I1109 14:39:35.428256  196795 default_sa.go:55] duration metric: took 26.634138ms for default service account to be created ...
	I1109 14:39:35.428275  196795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:39:35.432793  196795 system_pods.go:86] 8 kube-system pods found
	I1109 14:39:35.432824  196795 system_pods.go:89] "coredns-66bc5c9577-4hk6l" [85300af6-fc4a-42dd-b6f9-4374a4461cdc] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:39:35.432834  196795 system_pods.go:89] "etcd-embed-certs-422728" [a71faa76-21fd-4bf5-82d3-26489363edf1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:39:35.432841  196795 system_pods.go:89] "kindnet-29xxd" [081cda95-4468-46a9-a913-ec3c53472afd] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1109 14:39:35.432854  196795 system_pods.go:89] "kube-apiserver-embed-certs-422728" [fa49f721-7504-41a2-80d6-eeac9f3ba024] Running
	I1109 14:39:35.432865  196795 system_pods.go:89] "kube-controller-manager-embed-certs-422728" [1fefd317-f995-414d-9744-cb3bb29d6663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:39:35.432877  196795 system_pods.go:89] "kube-proxy-5zn8j" [91237b20-cef1-4550-bd7c-cbf7ec8d850c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 14:39:35.432884  196795 system_pods.go:89] "kube-scheduler-embed-certs-422728" [aadcb563-6ce1-437f-ab4e-6f9450bb1b93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:39:35.432901  196795 system_pods.go:89] "storage-provisioner" [e11ae084-2938-40cc-9538-cffa02747d9b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:39:35.432909  196795 system_pods.go:126] duration metric: took 4.628396ms to wait for k8s-apps to be running ...
	I1109 14:39:35.432921  196795 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:39:35.432993  196795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:39:35.485432  196795 system_svc.go:56] duration metric: took 52.500556ms WaitForService to wait for kubelet
	I1109 14:39:35.485461  196795 kubeadm.go:587] duration metric: took 10.010301465s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:39:35.485480  196795 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:39:35.509089  196795 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:39:35.509123  196795 node_conditions.go:123] node cpu capacity is 2
	I1109 14:39:35.509136  196795 node_conditions.go:105] duration metric: took 23.649629ms to run NodePressure ...
	I1109 14:39:35.509148  196795 start.go:242] waiting for startup goroutines ...
	I1109 14:39:35.509156  196795 start.go:247] waiting for cluster config update ...
	I1109 14:39:35.509166  196795 start.go:256] writing updated cluster config ...
	I1109 14:39:35.509440  196795 ssh_runner.go:195] Run: rm -f paused
	I1109 14:39:35.523671  196795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:39:35.544324  196795 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	W1109 14:39:35.214818  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:37.670741  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:37.550361  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:39.551201  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:39.671795  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:41.672702  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:42.050591  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:44.052665  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:43.679828  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:46.172576  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:46.549936  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:48.550731  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:50.550852  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:48.675461  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:51.171417  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:53.050155  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:55.050846  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:39:53.669698  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:55.670713  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:58.170504  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:39:57.550560  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:00.080694  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:00.191460  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:40:02.670935  196129 pod_ready.go:104] pod "coredns-66bc5c9577-rbvc2" is not "Ready", error: <nil>
	W1109 14:40:02.550181  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	W1109 14:40:04.550484  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	I1109 14:40:05.170570  196129 pod_ready.go:94] pod "coredns-66bc5c9577-rbvc2" is "Ready"
	I1109 14:40:05.170595  196129 pod_ready.go:86] duration metric: took 32.005779394s for pod "coredns-66bc5c9577-rbvc2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.173494  196129 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.178322  196129 pod_ready.go:94] pod "etcd-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.178350  196129 pod_ready.go:86] duration metric: took 4.826832ms for pod "etcd-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.181165  196129 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.185964  196129 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.185994  196129 pod_ready.go:86] duration metric: took 4.801946ms for pod "kube-apiserver-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.188492  196129 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.369137  196129 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:05.369168  196129 pod_ready.go:86] duration metric: took 180.647632ms for pod "kube-controller-manager-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.567982  196129 pod_ready.go:83] waiting for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:05.968952  196129 pod_ready.go:94] pod "kube-proxy-c57m2" is "Ready"
	I1109 14:40:05.968978  196129 pod_ready.go:86] duration metric: took 400.969079ms for pod "kube-proxy-c57m2" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.169164  196129 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.568343  196129 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-103048" is "Ready"
	I1109 14:40:06.568432  196129 pod_ready.go:86] duration metric: took 399.237416ms for pod "kube-scheduler-default-k8s-diff-port-103048" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:06.568451  196129 pod_ready.go:40] duration metric: took 33.4720313s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:40:06.631797  196129 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:40:06.635018  196129 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-103048" cluster and "default" namespace by default
	W1109 14:40:06.551498  196795 pod_ready.go:104] pod "coredns-66bc5c9577-4hk6l" is not "Ready", error: <nil>
	I1109 14:40:07.550990  196795 pod_ready.go:94] pod "coredns-66bc5c9577-4hk6l" is "Ready"
	I1109 14:40:07.551029  196795 pod_ready.go:86] duration metric: took 32.006673308s for pod "coredns-66bc5c9577-4hk6l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.553713  196795 pod_ready.go:83] waiting for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.558418  196795 pod_ready.go:94] pod "etcd-embed-certs-422728" is "Ready"
	I1109 14:40:07.558442  196795 pod_ready.go:86] duration metric: took 4.698642ms for pod "etcd-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.560963  196795 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.565961  196795 pod_ready.go:94] pod "kube-apiserver-embed-certs-422728" is "Ready"
	I1109 14:40:07.565990  196795 pod_ready.go:86] duration metric: took 4.998009ms for pod "kube-apiserver-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.568596  196795 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.747686  196795 pod_ready.go:94] pod "kube-controller-manager-embed-certs-422728" is "Ready"
	I1109 14:40:07.747712  196795 pod_ready.go:86] duration metric: took 179.092274ms for pod "kube-controller-manager-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:07.948777  196795 pod_ready.go:83] waiting for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.348208  196795 pod_ready.go:94] pod "kube-proxy-5zn8j" is "Ready"
	I1109 14:40:08.348242  196795 pod_ready.go:86] duration metric: took 399.417231ms for pod "kube-proxy-5zn8j" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.548588  196795 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.948477  196795 pod_ready.go:94] pod "kube-scheduler-embed-certs-422728" is "Ready"
	I1109 14:40:08.948506  196795 pod_ready.go:86] duration metric: took 399.893445ms for pod "kube-scheduler-embed-certs-422728" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:40:08.948519  196795 pod_ready.go:40] duration metric: took 33.424813505s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:40:09.011705  196795 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:40:09.015201  196795 out.go:179] * Done! kubectl is now configured to use "embed-certs-422728" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.174014359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.180737077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.181257836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.198212406Z" level=info msg="Created container 4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl/dashboard-metrics-scraper" id=88611334-173b-429b-a2b8-f9cc03ee7d78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.199373901Z" level=info msg="Starting container: 4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98" id=03454b15-bf55-4377-8bd6-b983199910d7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.201388163Z" level=info msg="Started container" PID=1649 containerID=4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl/dashboard-metrics-scraper id=03454b15-bf55-4377-8bd6-b983199910d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9b0c05c409c71a34c4df64cb5c2ff501bc3a2054f1dbdf2985e00a20e0c69e2f
	Nov 09 14:40:13 embed-certs-422728 conmon[1647]: conmon 4d552f4b4d8a6b91636f <ninfo>: container 1649 exited with status 1
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.474069764Z" level=info msg="Removing container: 5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf" id=7a1e0c2a-6fec-4bca-bbe4-293b757cd551 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.482488538Z" level=info msg="Error loading conmon cgroup of container 5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf: cgroup deleted" id=7a1e0c2a-6fec-4bca-bbe4-293b757cd551 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:40:13 embed-certs-422728 crio[646]: time="2025-11-09T14:40:13.487064947Z" level=info msg="Removed container 5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl/dashboard-metrics-scraper" id=7a1e0c2a-6fec-4bca-bbe4-293b757cd551 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.933853456Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.938658979Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.938693728Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.938716473Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.941992095Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.942031538Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.942055546Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.946447856Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.946484238Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.946505177Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.950019227Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.950052466Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.950077656Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.954130738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:40:15 embed-certs-422728 crio[646]: time="2025-11-09T14:40:15.954166365Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	4d552f4b4d8a6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   2                   9b0c05c409c71       dashboard-metrics-scraper-6ffb444bf9-phsgl   kubernetes-dashboard
	7e3f27e138c59       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   d40c1330d26c4       storage-provisioner                          kube-system
	fe108ea59a5d4       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   1ca2e62c14378       kubernetes-dashboard-855c9754f9-qdgpq        kubernetes-dashboard
	d5c35ad31efd7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   079f28fdbc015       coredns-66bc5c9577-4hk6l                     kube-system
	df5eeef259ea8       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   4f8c53fa1d93a       busybox                                      default
	323cdc33731a9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   77d93fc0efa92       kindnet-29xxd                                kube-system
	3b1b52ea2560c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago       Exited              storage-provisioner         1                   d40c1330d26c4       storage-provisioner                          kube-system
	de1e286695edb       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   c12e6d2b51064       kube-proxy-5zn8j                             kube-system
	a9943a66511d5       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   31b9b858ac8d4       kube-scheduler-embed-certs-422728            kube-system
	2b949bf057b2f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   5fda1dcfafbd8       kube-apiserver-embed-certs-422728            kube-system
	7ac348b06cb3a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   7959eb0b36f7e       kube-controller-manager-embed-certs-422728   kube-system
	7f99978e234d1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   011fa537d4769       etcd-embed-certs-422728                      kube-system
	
	
	==> coredns [d5c35ad31efd72a72f8ce73406787babc933e64dba57602e67b2a275575beab8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60362 - 58300 "HINFO IN 4347462580880539103.5425589942879162873. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014838108s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-422728
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-422728
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=embed-certs-422728
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_38_02_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-422728
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:40:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:40:04 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:40:04 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:40:04 +0000   Sun, 09 Nov 2025 14:37:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:40:04 +0000   Sun, 09 Nov 2025 14:38:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-422728
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                d088bd86-8a64-46dd-b81e-fc8968fd6fcd
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-4hk6l                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-embed-certs-422728                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-29xxd                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-422728             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-422728    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-5zn8j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-422728             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-phsgl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-qdgpq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node embed-certs-422728 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node embed-certs-422728 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node embed-certs-422728 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m21s                  node-controller  Node embed-certs-422728 event: Registered Node embed-certs-422728 in Controller
	  Normal   NodeReady                99s                    kubelet          Node embed-certs-422728 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node embed-certs-422728 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node embed-certs-422728 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node embed-certs-422728 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                    node-controller  Node embed-certs-422728 event: Registered Node embed-certs-422728 in Controller
	
	
	==> dmesg <==
	[Nov 9 14:15] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [7f99978e234d142a4cacb3d0faff188a383754a6fe3a8aafef37dfdbccf51f16] <==
	{"level":"warn","ts":"2025-11-09T14:39:29.762249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:29.808537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:29.891446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:29.928017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:29.976565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.040667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.088657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.133327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.208721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.300042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.356881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.395372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.451217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.495182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.524741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.579498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.622844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.672371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.702820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.749024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.812744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.859994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.905486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:30.936193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:39:31.105179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42154","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:40:27 up  1:22,  0 user,  load average: 3.10, 3.40, 2.81
	Linux embed-certs-422728 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [323cdc33731a98cbe7f1496b50119456aef177e9a9a5892b2aa6aa476ddc2327] <==
	I1109 14:39:35.741237       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:39:35.745332       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:39:35.751729       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:39:35.752128       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:39:35.752282       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:39:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:39:35.933208       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:39:35.933316       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:39:35.933450       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:39:35.934542       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:40:05.933918       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1109 14:40:05.934060       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:40:05.934151       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:40:05.934914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1109 14:40:07.534248       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:40:07.534385       1 metrics.go:72] Registering metrics
	I1109 14:40:07.534523       1 controller.go:711] "Syncing nftables rules"
	I1109 14:40:15.933498       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:40:15.933588       1 main.go:301] handling current node
	I1109 14:40:25.937986       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1109 14:40:25.938026       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b949bf057b2fcac7eac7de6351c20a0ac0d3dccfcd10d71e47fcbaab8fc91dc] <==
	I1109 14:39:33.332962       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1109 14:39:33.332991       1 policy_source.go:240] refreshing policies
	I1109 14:39:33.333117       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:39:33.333423       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:39:33.341213       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:39:33.356301       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:39:33.358024       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:39:33.399041       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:39:33.399171       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:39:33.399198       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:39:33.430030       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:39:33.488083       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1109 14:39:33.502465       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:39:33.735428       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:39:34.397447       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:39:34.566396       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:39:34.661873       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:39:34.709237       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:39:34.747128       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:39:34.901574       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.58.88"}
	I1109 14:39:34.923625       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.67.145"}
	I1109 14:39:36.605829       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:39:36.843331       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:39:36.902125       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:39:37.032766       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7ac348b06cb3a51d4fa87f75d29323432bc36b8fade63732460d89860ce8f3df] <==
	I1109 14:39:36.451850       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:39:36.452555       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:39:36.452599       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:39:36.470212       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:39:36.470809       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:39:36.470921       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:39:36.471005       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:39:36.471710       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:39:36.471752       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:39:36.471969       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:39:36.471999       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:39:36.472569       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:39:36.474442       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:39:36.480166       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:39:36.480767       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:39:36.482651       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:39:36.488150       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:39:36.494228       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:39:36.494344       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:39:36.503080       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:39:36.511251       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:39:36.511341       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:39:36.511360       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1109 14:39:37.061497       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1109 14:39:37.066294       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [de1e286695edb140cab32ced2c194b32034a19be382818767fa2a5a464fd0087] <==
	I1109 14:39:35.844987       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:39:36.104080       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:39:36.651810       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:39:36.667259       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:39:36.755225       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:39:37.130985       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:39:37.131047       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:39:37.145719       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:39:37.146182       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:39:37.146394       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:39:37.147730       1 config.go:200] "Starting service config controller"
	I1109 14:39:37.147790       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:39:37.147844       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:39:37.147896       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:39:37.147932       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:39:37.147957       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:39:37.154412       1 config.go:309] "Starting node config controller"
	I1109 14:39:37.155440       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:39:37.155508       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:39:37.248221       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:39:37.248222       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:39:37.248312       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a9943a66511d5ad86eafee1dfeb5b5c4217765aa7223fad81d4d85ed65bd4366] <==
	I1109 14:39:33.355199       1 serving.go:386] Generated self-signed cert in-memory
	I1109 14:39:36.785076       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:39:36.787406       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:39:36.846137       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1109 14:39:36.846238       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1109 14:39:36.846359       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:36.846403       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:36.846452       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:39:36.846496       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:39:36.849107       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:39:36.849308       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:39:36.947758       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 14:39:36.947810       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:39:36.960127       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: E1109 14:39:38.285382     768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78f81139-942a-4424-8b87-68a3e0b04fc6-kube-api-access-9jx2b podName:78f81139-942a-4424-8b87-68a3e0b04fc6 nodeName:}" failed. No retries permitted until 2025-11-09 14:39:38.785355286 +0000 UTC m=+14.922278156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9jx2b" (UniqueName: "kubernetes.io/projected/78f81139-942a-4424-8b87-68a3e0b04fc6-kube-api-access-9jx2b") pod "dashboard-metrics-scraper-6ffb444bf9-phsgl" (UID: "78f81139-942a-4424-8b87-68a3e0b04fc6") : failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: E1109 14:39:38.287398     768 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: E1109 14:39:38.287546     768 projected.go:196] Error preparing data for projected volume kube-api-access-jbv96 for pod kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdgpq: failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: E1109 14:39:38.287673     768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4624bdb-c87a-4e38-bfd4-65e1d022ae3a-kube-api-access-jbv96 podName:a4624bdb-c87a-4e38-bfd4-65e1d022ae3a nodeName:}" failed. No retries permitted until 2025-11-09 14:39:38.787653439 +0000 UTC m=+14.924576309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jbv96" (UniqueName: "kubernetes.io/projected/a4624bdb-c87a-4e38-bfd4-65e1d022ae3a-kube-api-access-jbv96") pod "kubernetes-dashboard-855c9754f9-qdgpq" (UID: "a4624bdb-c87a-4e38-bfd4-65e1d022ae3a") : failed to sync configmap cache: timed out waiting for the condition
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: W1109 14:39:38.953916     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/crio-1ca2e62c143788a72e86aa04d9da7f37efffe510b5eca4e03ca0ec4b4e36aa3b WatchSource:0}: Error finding container 1ca2e62c143788a72e86aa04d9da7f37efffe510b5eca4e03ca0ec4b4e36aa3b: Status 404 returned error can't find the container with id 1ca2e62c143788a72e86aa04d9da7f37efffe510b5eca4e03ca0ec4b4e36aa3b
	Nov 09 14:39:38 embed-certs-422728 kubelet[768]: W1109 14:39:38.975175     768 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/45825e68cb8679a16f491499ba94fa2babf9539c7324c0ac19dfc0bb866dfb12/crio-9b0c05c409c71a34c4df64cb5c2ff501bc3a2054f1dbdf2985e00a20e0c69e2f WatchSource:0}: Error finding container 9b0c05c409c71a34c4df64cb5c2ff501bc3a2054f1dbdf2985e00a20e0c69e2f: Status 404 returned error can't find the container with id 9b0c05c409c71a34c4df64cb5c2ff501bc3a2054f1dbdf2985e00a20e0c69e2f
	Nov 09 14:39:47 embed-certs-422728 kubelet[768]: I1109 14:39:47.427720     768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-qdgpq" podStartSLOduration=3.547029057 podStartE2EDuration="11.426862183s" podCreationTimestamp="2025-11-09 14:39:36 +0000 UTC" firstStartedPulling="2025-11-09 14:39:38.957213701 +0000 UTC m=+15.094136571" lastFinishedPulling="2025-11-09 14:39:46.837046818 +0000 UTC m=+22.973969697" observedRunningTime="2025-11-09 14:39:47.426245161 +0000 UTC m=+23.563168055" watchObservedRunningTime="2025-11-09 14:39:47.426862183 +0000 UTC m=+23.563785062"
	Nov 09 14:39:52 embed-certs-422728 kubelet[768]: I1109 14:39:52.402450     768 scope.go:117] "RemoveContainer" containerID="35407c86dba9ac0dd30867af492d13dc39d0b1b307ae8eb0cb672cfdedb7fbc8"
	Nov 09 14:39:53 embed-certs-422728 kubelet[768]: I1109 14:39:53.407150     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:39:53 embed-certs-422728 kubelet[768]: E1109 14:39:53.407315     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:39:53 embed-certs-422728 kubelet[768]: I1109 14:39:53.409648     768 scope.go:117] "RemoveContainer" containerID="35407c86dba9ac0dd30867af492d13dc39d0b1b307ae8eb0cb672cfdedb7fbc8"
	Nov 09 14:39:54 embed-certs-422728 kubelet[768]: I1109 14:39:54.411269     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:39:54 embed-certs-422728 kubelet[768]: E1109 14:39:54.411932     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:39:58 embed-certs-422728 kubelet[768]: I1109 14:39:58.907215     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:39:58 embed-certs-422728 kubelet[768]: E1109 14:39:58.907431     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:40:06 embed-certs-422728 kubelet[768]: I1109 14:40:06.446270     768 scope.go:117] "RemoveContainer" containerID="3b1b52ea2560ce0c00fa2ea0c3ba7b2fb276d6faf0899c104043d7528470cddd"
	Nov 09 14:40:13 embed-certs-422728 kubelet[768]: I1109 14:40:13.170863     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:40:13 embed-certs-422728 kubelet[768]: I1109 14:40:13.469958     768 scope.go:117] "RemoveContainer" containerID="5ae40f2a175040db81790736f4196360f115d8bd6e6ecee71c2583eacd5acccf"
	Nov 09 14:40:13 embed-certs-422728 kubelet[768]: I1109 14:40:13.470307     768 scope.go:117] "RemoveContainer" containerID="4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98"
	Nov 09 14:40:13 embed-certs-422728 kubelet[768]: E1109 14:40:13.470477     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:40:18 embed-certs-422728 kubelet[768]: I1109 14:40:18.905373     768 scope.go:117] "RemoveContainer" containerID="4d552f4b4d8a6b91636fb0457d54c17eafdcbd0e136bd23e839ea5daffbf2f98"
	Nov 09 14:40:18 embed-certs-422728 kubelet[768]: E1109 14:40:18.905593     768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-phsgl_kubernetes-dashboard(78f81139-942a-4424-8b87-68a3e0b04fc6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-phsgl" podUID="78f81139-942a-4424-8b87-68a3e0b04fc6"
	Nov 09 14:40:21 embed-certs-422728 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:40:21 embed-certs-422728 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:40:21 embed-certs-422728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [fe108ea59a5d4ffb1318ee1b4113ef12ff67b45f3c2041c028c9738cc25481d6] <==
	2025/11/09 14:39:46 Using namespace: kubernetes-dashboard
	2025/11/09 14:39:46 Using in-cluster config to connect to apiserver
	2025/11/09 14:39:46 Using secret token for csrf signing
	2025/11/09 14:39:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:39:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:39:46 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:39:46 Generating JWE encryption key
	2025/11/09 14:39:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:39:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:39:47 Initializing JWE encryption key from synchronized object
	2025/11/09 14:39:47 Creating in-cluster Sidecar client
	2025/11/09 14:39:47 Serving insecurely on HTTP port: 9090
	2025/11/09 14:39:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:40:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:39:46 Starting overwatch
	
	
	==> storage-provisioner [3b1b52ea2560ce0c00fa2ea0c3ba7b2fb276d6faf0899c104043d7528470cddd] <==
	I1109 14:39:35.874660       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:40:05.877093       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [7e3f27e138c59aa0cb710724e534caab4379d6a10868fbbe90e3e8f884adb4a7] <==
	I1109 14:40:06.499444       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:40:06.513039       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:40:06.513149       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:40:06.519471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:09.975231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:14.235272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:17.834216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:20.887785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:23.910358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:23.915516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:40:23.915670       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:40:23.915832       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-422728_d646feab-232e-4d11-bd52-56eb99080d9e!
	I1109 14:40:23.916689       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75bbad6d-f285-4ed2-83c3-c9896fff11ae", APIVersion:"v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-422728_d646feab-232e-4d11-bd52-56eb99080d9e became leader
	W1109 14:40:23.929258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:23.942561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:40:24.017684       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-422728_d646feab-232e-4d11-bd52-56eb99080d9e!
	W1109 14:40:25.953306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:25.959091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:27.962732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:40:27.971409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
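The kindnet and storage-provisioner entries above both report "dial tcp 10.96.0.1:443: i/o timeout" for roughly thirty seconds after the restart before recovering, i.e. the in-cluster kubernetes service VIP was briefly unreachable even though the apiserver itself came up. A minimal way to separate those two failure modes against this profile is sketched below; it assumes crictl is present in the node image, which is normally the case for the crio runtime.

    # apiserver health via the kubeconfig endpoint (host-mapped port, bypasses the 10.96.0.1 VIP)
    kubectl --context embed-certs-422728 get --raw='/readyz'
    # container state as reported by the CRI runtime on the node itself
    out/minikube-linux-arm64 -p embed-certs-422728 ssh "sudo crictl ps"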
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-422728 -n embed-certs-422728
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-422728 -n embed-certs-422728: exit status 2 (499.858935ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-422728 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.05s)
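The two post-mortem probes above can be replayed by hand against the same profile; a minimal sketch (commands copied from this section, with shell quoting added):

    # apiserver status probe; in this run it printed "Running" but the command exited with status 2
    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p embed-certs-422728 -n embed-certs-422728
    # list pods not in phase Running across all namespaces (empty output means every pod reports Running)
    kubectl --context embed-certs-422728 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'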

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1109 14:41:22.480290    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (346.333749ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:41:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
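The MK_ADDON_ENABLE_PAUSED error above comes from the addon command's paused-state pre-check, which shells out to "sudo runc list -f json" on the node and fails because /run/runc does not exist there. The failing check can be reproduced directly; a sketch, where the crictl cross-check is an assumption about what ships in the node image:

    # reproduce the exact check minikube ran; expect the same "open /run/runc: no such file or directory" error
    out/minikube-linux-arm64 -p newest-cni-192074 ssh "sudo runc list -f json"
    # cross-check which containers the CRI runtime itself reports (assumes crictl is installed, as is standard for crio nodes)
    out/minikube-linux-arm64 -p newest-cni-192074 ssh "sudo crictl ps"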
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-192074
helpers_test.go:243: (dbg) docker inspect newest-cni-192074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223",
	        "Created": "2025-11-09T14:40:38.404452618Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:40:38.502308718Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/hostname",
	        "HostsPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/hosts",
	        "LogPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223-json.log",
	        "Name": "/newest-cni-192074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-192074:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-192074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223",
	                "LowerDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-192074",
	                "Source": "/var/lib/docker/volumes/newest-cni-192074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-192074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-192074",
	                "name.minikube.sigs.k8s.io": "newest-cni-192074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb00de5763e01d2128ea9ce7fc89cb3f6852025411b7aef6127fb089a5e2cedc",
	            "SandboxKey": "/var/run/docker/netns/cb00de5763e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-192074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:c0:cb:64:67:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "114ceded31c032452a3ee1a01231f6fa4125cd9140fa08f1853ed64e4b9d3746",
	                    "EndpointID": "799c442e52d836c368a7ad0f9c28aec014390f3e0e4b66a2fb8decfc6fbe64be",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-192074",
	                        "6efa62eda748"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192074 -n newest-cni-192074
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-192074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-192074 logs -n 25: (1.463562273s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ delete  │ -p old-k8s-version-349599                                                                                                                                                                                                                     │ old-k8s-version-349599       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ delete  │ -p cert-expiration-179822                                                                                                                                                                                                                     │ cert-expiration-179822       │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:37 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:37 UTC │ 09 Nov 25 14:38 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ stop    │ -p embed-certs-422728 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ image   │ default-k8s-diff-port-103048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p default-k8s-diff-port-103048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-274584                                                                                                                                                                                                               │ disable-driver-mounts-274584 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:40:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:40:32.753972  203791 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:40:32.754515  203791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:40:32.754735  203791 out.go:374] Setting ErrFile to fd 2...
	I1109 14:40:32.754760  203791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:40:32.755118  203791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:40:32.755646  203791 out.go:368] Setting JSON to false
	I1109 14:40:32.760642  203791 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4983,"bootTime":1762694250,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:40:32.760835  203791 start.go:143] virtualization:  
	I1109 14:40:32.765037  203791 out.go:179] * [newest-cni-192074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:40:32.768356  203791 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:40:32.768554  203791 notify.go:221] Checking for updates...
	I1109 14:40:32.776094  203791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:40:32.779401  203791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:40:32.784125  203791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:40:32.787209  203791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:40:32.790304  203791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:40:32.793876  203791 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:40:32.794050  203791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:40:32.879678  203791 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:40:32.879791  203791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:40:32.992467  203791 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-09 14:40:32.978903794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:40:32.992573  203791 docker.go:319] overlay module found
	I1109 14:40:32.996303  203791 out.go:179] * Using the docker driver based on user configuration
	I1109 14:40:32.999262  203791 start.go:309] selected driver: docker
	I1109 14:40:32.999282  203791 start.go:930] validating driver "docker" against <nil>
	I1109 14:40:32.999319  203791 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:40:33.000037  203791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:40:33.112802  203791 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-09 14:40:33.09742615 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:40:33.112967  203791 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1109 14:40:33.112992  203791 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1109 14:40:33.113221  203791 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:40:33.116918  203791 out.go:179] * Using Docker driver with root privileges
	I1109 14:40:33.120161  203791 cni.go:84] Creating CNI manager for ""
	I1109 14:40:33.120247  203791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:40:33.120257  203791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:40:33.120338  203791 start.go:353] cluster config:
	{Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:40:33.123452  203791 out.go:179] * Starting "newest-cni-192074" primary control-plane node in "newest-cni-192074" cluster
	I1109 14:40:33.126650  203791 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:40:33.129708  203791 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:40:33.132608  203791 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:40:33.132658  203791 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:40:33.132669  203791 cache.go:65] Caching tarball of preloaded images
	I1109 14:40:33.132752  203791 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:40:33.132760  203791 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
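The preload check above finds the cri-o image tarball already cached on the Jenkins host, so no download is needed. An approximate way to confirm the cached artifact by hand (path taken from the log; this is not a step the harness itself runs) would be:

	ls -lh /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4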
	I1109 14:40:33.132878  203791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/config.json ...
	I1109 14:40:33.132896  203791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/config.json: {Name:mk5690e6a9c8471cad10f0c0e02610ee3777b7bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:40:33.133062  203791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:40:33.195645  203791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:40:33.195670  203791 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:40:33.195682  203791 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:40:33.195705  203791 start.go:360] acquireMachinesLock for newest-cni-192074: {Name:mk50468e4f833af9c54b7aff282eee0b8ef871dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:40:33.195811  203791 start.go:364] duration metric: took 81.412µs to acquireMachinesLock for "newest-cni-192074"
	I1109 14:40:33.195841  203791 start.go:93] Provisioning new machine with config: &{Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:40:33.195980  203791 start.go:125] createHost starting for "" (driver="docker")
	I1109 14:40:30.272208  203153 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:40:30.272480  203153 start.go:159] libmachine.API.Create for "no-preload-545474" (driver="docker")
	I1109 14:40:30.272521  203153 client.go:173] LocalClient.Create starting
	I1109 14:40:30.272589  203153 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 14:40:30.272622  203153 main.go:143] libmachine: Decoding PEM data...
	I1109 14:40:30.272639  203153 main.go:143] libmachine: Parsing certificate...
	I1109 14:40:30.272690  203153 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 14:40:30.272713  203153 main.go:143] libmachine: Decoding PEM data...
	I1109 14:40:30.272723  203153 main.go:143] libmachine: Parsing certificate...
	I1109 14:40:30.273085  203153 cli_runner.go:164] Run: docker network inspect no-preload-545474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:40:30.301090  203153 cli_runner.go:211] docker network inspect no-preload-545474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:40:30.301177  203153 network_create.go:284] running [docker network inspect no-preload-545474] to gather additional debugging logs...
	I1109 14:40:30.301198  203153 cli_runner.go:164] Run: docker network inspect no-preload-545474
	W1109 14:40:30.318388  203153 cli_runner.go:211] docker network inspect no-preload-545474 returned with exit code 1
	I1109 14:40:30.318425  203153 network_create.go:287] error running [docker network inspect no-preload-545474]: docker network inspect no-preload-545474: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-545474 not found
	I1109 14:40:30.318437  203153 network_create.go:289] output of [docker network inspect no-preload-545474]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-545474 not found
	
	** /stderr **
	I1109 14:40:30.318531  203153 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:40:30.336444  203153 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b901b8dcb821 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:01:f6:7f:4e:91} reservation:<nil>}
	I1109 14:40:30.336860  203153 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-46dda1eda2df IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:a9:4d:4f:8f:31} reservation:<nil>}
	I1109 14:40:30.337204  203153 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3b44df0b0b1c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:80:ac:56:fe:3d} reservation:<nil>}
	I1109 14:40:30.337495  203153 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-78ce79b8fdce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:c4:83:e6:e4:7c} reservation:<nil>}
	I1109 14:40:30.337921  203153 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c38b30}
	I1109 14:40:30.337954  203153 network_create.go:124] attempt to create docker network no-preload-545474 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1109 14:40:30.338019  203153 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-545474 no-preload-545474
	I1109 14:40:30.404878  203153 network_create.go:108] docker network no-preload-545474 192.168.85.0/24 created
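The subnet scan above skips 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24 and 192.168.76.0/24 because they are already claimed by other minikube bridges, then settles on 192.168.85.0/24. A roughly equivalent manual check and creation, reusing the flags recorded in the log (illustrative only, not part of the test flow), would be:

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=no-preload-545474 no-preload-545474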
	I1109 14:40:30.404956  203153 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-545474" container
	I1109 14:40:30.405068  203153 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:40:30.421640  203153 cli_runner.go:164] Run: docker volume create no-preload-545474 --label name.minikube.sigs.k8s.io=no-preload-545474 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:40:30.442346  203153 oci.go:103] Successfully created a docker volume no-preload-545474
	I1109 14:40:30.442430  203153 cli_runner.go:164] Run: docker run --rm --name no-preload-545474-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-545474 --entrypoint /usr/bin/test -v no-preload-545474:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:40:30.637135  203153 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1109 14:40:30.645534  203153 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1109 14:40:30.646306  203153 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1109 14:40:30.683167  203153 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1109 14:40:30.692525  203153 cache.go:157] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1109 14:40:30.692547  203153 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 448.523174ms
	I1109 14:40:30.692560  203153 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1109 14:40:30.706542  203153 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1109 14:40:30.710531  203153 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1109 14:40:30.742157  203153 cache.go:162] opening:  /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1109 14:40:31.108130  203153 cache.go:157] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1109 14:40:31.108163  203153 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 864.563949ms
	I1109 14:40:31.108177  203153 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1109 14:40:31.236683  203153 oci.go:107] Successfully prepared a docker volume no-preload-545474
	I1109 14:40:31.236725  203153 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1109 14:40:31.236852  203153 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 14:40:31.236952  203153 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:40:31.333129  203153 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-545474 --name no-preload-545474 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-545474 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-545474 --network no-preload-545474 --ip 192.168.85.2 --volume no-preload-545474:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:40:31.696572  203153 cache.go:157] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1109 14:40:31.696600  203153 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.452335944s
	I1109 14:40:31.696618  203153 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1109 14:40:31.742451  203153 cache.go:157] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1109 14:40:31.742482  203153 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.498389903s
	I1109 14:40:31.742495  203153 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1109 14:40:31.774673  203153 cache.go:157] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1109 14:40:31.774865  203153 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.531371153s
	I1109 14:40:31.774878  203153 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1109 14:40:31.823255  203153 cache.go:157] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1109 14:40:31.823287  203153 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.579430347s
	I1109 14:40:31.823307  203153 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1109 14:40:32.637451  203153 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-545474 --name no-preload-545474 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-545474 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-545474 --network no-preload-545474 --ip 192.168.85.2 --volume no-preload-545474:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1: (1.304211883s)
	I1109 14:40:32.637538  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Running}}
	I1109 14:40:32.697496  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:40:32.771295  203153 cli_runner.go:164] Run: docker exec no-preload-545474 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:40:32.866204  203153 oci.go:144] the created container "no-preload-545474" has a running status.
	I1109 14:40:32.866235  203153 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa...
	I1109 14:40:32.881553  203153 cache.go:157] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1109 14:40:32.881625  203153 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.637427266s
	I1109 14:40:32.881652  203153 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1109 14:40:32.881676  203153 cache.go:87] Successfully saved all images to host disk.
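Because this profile appears to start without the preloaded tarball, each control-plane image is fetched and written to the host cache individually; the timings above cover pause, kube-proxy, coredns, the scheduler/apiserver/controller-manager trio and etcd. A quick, illustrative way to inspect that cache on the host (path taken from the log) is:

	ls /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/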
	I1109 14:40:33.627483  203153 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:40:33.661476  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:40:33.698312  203153 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:40:33.698332  203153 kic_runner.go:114] Args: [docker exec --privileged no-preload-545474 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:40:33.774822  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:40:33.811796  203153 machine.go:94] provisionDockerMachine start ...
	I1109 14:40:33.813071  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:33.854622  203153 main.go:143] libmachine: Using SSH client type: native
	I1109 14:40:33.854971  203153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1109 14:40:33.854987  203153 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:40:33.855682  203153 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59640->127.0.0.1:33075: read: connection reset by peer
	I1109 14:40:33.202198  203791 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1109 14:40:33.202440  203791 start.go:159] libmachine.API.Create for "newest-cni-192074" (driver="docker")
	I1109 14:40:33.202476  203791 client.go:173] LocalClient.Create starting
	I1109 14:40:33.202582  203791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem
	I1109 14:40:33.202625  203791 main.go:143] libmachine: Decoding PEM data...
	I1109 14:40:33.202645  203791 main.go:143] libmachine: Parsing certificate...
	I1109 14:40:33.202697  203791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem
	I1109 14:40:33.202717  203791 main.go:143] libmachine: Decoding PEM data...
	I1109 14:40:33.202726  203791 main.go:143] libmachine: Parsing certificate...
	I1109 14:40:33.203114  203791 cli_runner.go:164] Run: docker network inspect newest-cni-192074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 14:40:33.253099  203791 cli_runner.go:211] docker network inspect newest-cni-192074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 14:40:33.253174  203791 network_create.go:284] running [docker network inspect newest-cni-192074] to gather additional debugging logs...
	I1109 14:40:33.253202  203791 cli_runner.go:164] Run: docker network inspect newest-cni-192074
	W1109 14:40:33.269423  203791 cli_runner.go:211] docker network inspect newest-cni-192074 returned with exit code 1
	I1109 14:40:33.269455  203791 network_create.go:287] error running [docker network inspect newest-cni-192074]: docker network inspect newest-cni-192074: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-192074 not found
	I1109 14:40:33.269470  203791 network_create.go:289] output of [docker network inspect newest-cni-192074]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-192074 not found
	
	** /stderr **
	I1109 14:40:33.269562  203791 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:40:33.289798  203791 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b901b8dcb821 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:01:f6:7f:4e:91} reservation:<nil>}
	I1109 14:40:33.290133  203791 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-46dda1eda2df IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3a:a9:4d:4f:8f:31} reservation:<nil>}
	I1109 14:40:33.290459  203791 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-3b44df0b0b1c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:80:ac:56:fe:3d} reservation:<nil>}
	I1109 14:40:33.290849  203791 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400196b740}
	I1109 14:40:33.290874  203791 network_create.go:124] attempt to create docker network newest-cni-192074 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 14:40:33.290934  203791 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-192074 newest-cni-192074
	I1109 14:40:33.348014  203791 network_create.go:108] docker network newest-cni-192074 192.168.76.0/24 created
	I1109 14:40:33.348046  203791 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-192074" container
	I1109 14:40:33.348124  203791 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 14:40:33.368128  203791 cli_runner.go:164] Run: docker volume create newest-cni-192074 --label name.minikube.sigs.k8s.io=newest-cni-192074 --label created_by.minikube.sigs.k8s.io=true
	I1109 14:40:33.410859  203791 oci.go:103] Successfully created a docker volume newest-cni-192074
	I1109 14:40:33.410967  203791 cli_runner.go:164] Run: docker run --rm --name newest-cni-192074-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-192074 --entrypoint /usr/bin/test -v newest-cni-192074:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1109 14:40:34.089338  203791 oci.go:107] Successfully prepared a docker volume newest-cni-192074
	I1109 14:40:34.089410  203791 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:40:34.089425  203791 kic.go:194] Starting extracting preloaded images to volume ...
	I1109 14:40:34.089490  203791 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-192074:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 14:40:37.018090  203153 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-545474
	
	I1109 14:40:37.018125  203153 ubuntu.go:182] provisioning hostname "no-preload-545474"
	I1109 14:40:37.018255  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:37.039842  203153 main.go:143] libmachine: Using SSH client type: native
	I1109 14:40:37.040631  203153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1109 14:40:37.040653  203153 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-545474 && echo "no-preload-545474" | sudo tee /etc/hostname
	I1109 14:40:37.211816  203153 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-545474
	
	I1109 14:40:37.211969  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:37.230750  203153 main.go:143] libmachine: Using SSH client type: native
	I1109 14:40:37.231076  203153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1109 14:40:37.231099  203153 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545474/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:40:37.384371  203153 main.go:143] libmachine: SSH cmd err, output: <nil>: 
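The snippet above only rewrites the 127.0.1.1 entry when the container's /etc/hosts does not already carry the node name. If needed, the result can be spot-checked from the host with something like the following (illustrative, not part of the harness):

	docker exec no-preload-545474 grep no-preload-545474 /etc/hosts
	docker exec no-preload-545474 hostname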
	I1109 14:40:37.384402  203153 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:40:37.384459  203153 ubuntu.go:190] setting up certificates
	I1109 14:40:37.384469  203153 provision.go:84] configureAuth start
	I1109 14:40:37.384551  203153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:40:37.406146  203153 provision.go:143] copyHostCerts
	I1109 14:40:37.406226  203153 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:40:37.406239  203153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:40:37.406302  203153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:40:37.406403  203153 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:40:37.406411  203153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:40:37.406441  203153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:40:37.406510  203153 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:40:37.406520  203153 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:40:37.406545  203153 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:40:37.406604  203153 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.no-preload-545474 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-545474]
	I1109 14:40:38.292432  203153 provision.go:177] copyRemoteCerts
	I1109 14:40:38.292524  203153 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:40:38.292594  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:38.311479  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:40:38.431734  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:40:38.460082  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:40:38.481773  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:40:38.513644  203153 provision.go:87] duration metric: took 1.129157176s to configureAuth
	I1109 14:40:38.513672  203153 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:40:38.513843  203153 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:40:38.513947  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:38.533245  203153 main.go:143] libmachine: Using SSH client type: native
	I1109 14:40:38.533646  203153 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33075 <nil> <nil>}
	I1109 14:40:38.533670  203153 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:40:39.019839  203153 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:40:39.019877  203153 machine.go:97] duration metric: took 5.206886962s to provisionDockerMachine
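provisionDockerMachine finishes by writing the CRIO_MINIKUBE_OPTIONS shown above into /etc/sysconfig/crio.minikube and restarting cri-o, so the 10.96.0.0/12 service CIDR is treated as an insecure registry range. An illustrative manual check of that file would be:

	docker exec no-preload-545474 cat /etc/sysconfig/crio.minikube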
	I1109 14:40:39.019887  203153 client.go:176] duration metric: took 8.747359516s to LocalClient.Create
	I1109 14:40:39.019901  203153 start.go:167] duration metric: took 8.747427579s to libmachine.API.Create "no-preload-545474"
	I1109 14:40:39.019908  203153 start.go:293] postStartSetup for "no-preload-545474" (driver="docker")
	I1109 14:40:39.019919  203153 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:40:39.019994  203153 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:40:39.020047  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:39.074544  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:40:39.235416  203153 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:40:39.240491  203153 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:40:39.240516  203153 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:40:39.240527  203153 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:40:39.240583  203153 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:40:39.240661  203153 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:40:39.240756  203153 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:40:39.259324  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:40:39.311419  203153 start.go:296] duration metric: took 291.495331ms for postStartSetup
	I1109 14:40:39.311771  203153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:40:39.381986  203153 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/config.json ...
	I1109 14:40:39.382263  203153 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:40:39.382350  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:39.428874  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:40:39.542425  203153 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:40:39.550073  203153 start.go:128] duration metric: took 9.281511101s to createHost
	I1109 14:40:39.550093  203153 start.go:83] releasing machines lock for "no-preload-545474", held for 9.281639587s
	I1109 14:40:39.550224  203153 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:40:39.575333  203153 ssh_runner.go:195] Run: cat /version.json
	I1109 14:40:39.575386  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:39.575601  203153 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:40:39.575657  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:40:39.603142  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:40:39.618662  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:40:39.717450  203153 ssh_runner.go:195] Run: systemctl --version
	I1109 14:40:39.837411  203153 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:40:39.871434  203153 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:40:39.875914  203153 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:40:39.875986  203153 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:40:39.908216  203153 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 14:40:39.908241  203153 start.go:496] detecting cgroup driver to use...
	I1109 14:40:39.908274  203153 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:40:39.908326  203153 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:40:39.926510  203153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:40:39.939219  203153 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:40:39.939285  203153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:40:39.957312  203153 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:40:39.976672  203153 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:40:40.122313  203153 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:40:40.244897  203153 docker.go:234] disabling docker service ...
	I1109 14:40:40.244967  203153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:40:40.266338  203153 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:40:40.279380  203153 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:40:40.489309  203153 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:40:40.690309  203153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:40:40.706634  203153 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:40:40.725847  203153 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:40:40.725967  203153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:40.736836  203153 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:40:40.736953  203153 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:40.749027  203153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:40.759184  203153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:40.769182  203153 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:40:40.779234  203153 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:40.789819  203153 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:40.806602  203153 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:40.818002  203153 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:40:40.826574  203153 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:40:40.835536  203153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:40:40.968084  203153 ssh_runner.go:195] Run: sudo systemctl restart crio
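Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that pause_image points at registry.k8s.io/pause:3.10.1, cgroup_manager is cgroupfs, conmon_cgroup is pod, and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls, before cri-o is restarted. An illustrative way to confirm the rewritten keys (not a step the test runs) is:

	docker exec no-preload-545474 grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf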
	I1109 14:40:41.095255  203153 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:40:41.095366  203153 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:40:41.099962  203153 start.go:564] Will wait 60s for crictl version
	I1109 14:40:41.100078  203153 ssh_runner.go:195] Run: which crictl
	I1109 14:40:41.104357  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:40:41.129978  203153 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:40:41.130133  203153 ssh_runner.go:195] Run: crio --version
	I1109 14:40:41.157731  203153 ssh_runner.go:195] Run: crio --version
	I1109 14:40:41.187392  203153 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:40:38.298033  203791 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-192074:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.208470264s)
	I1109 14:40:38.298062  203791 kic.go:203] duration metric: took 4.208632948s to extract preloaded images to volume ...
	W1109 14:40:38.298209  203791 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1109 14:40:38.298315  203791 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 14:40:38.388344  203791 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-192074 --name newest-cni-192074 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-192074 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-192074 --network newest-cni-192074 --ip 192.168.76.2 --volume newest-cni-192074:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1109 14:40:38.771438  203791 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Running}}
	I1109 14:40:38.793167  203791 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:40:38.822106  203791 cli_runner.go:164] Run: docker exec newest-cni-192074 stat /var/lib/dpkg/alternatives/iptables
	I1109 14:40:38.889895  203791 oci.go:144] the created container "newest-cni-192074" has a running status.
	I1109 14:40:38.889926  203791 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa...
	I1109 14:40:39.630477  203791 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 14:40:39.656578  203791 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:40:39.676358  203791 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 14:40:39.676393  203791 kic_runner.go:114] Args: [docker exec --privileged newest-cni-192074 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 14:40:39.725642  203791 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:40:39.747134  203791 machine.go:94] provisionDockerMachine start ...
	I1109 14:40:39.747250  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:39.772741  203791 main.go:143] libmachine: Using SSH client type: native
	I1109 14:40:39.773087  203791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:40:39.773096  203791 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:40:39.776440  203791 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49864->127.0.0.1:33080: read: connection reset by peer
	I1109 14:40:41.190231  203153 cli_runner.go:164] Run: docker network inspect no-preload-545474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:40:41.206604  203153 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:40:41.210706  203153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:40:41.220619  203153 kubeadm.go:884] updating cluster {Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:40:41.220729  203153 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:40:41.220777  203153 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:40:41.246245  203153 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1109 14:40:41.246270  203153 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1109 14:40:41.246309  203153 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:40:41.246503  203153 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:40:41.246593  203153 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:40:41.246686  203153 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:40:41.246777  203153 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:40:41.246858  203153 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1109 14:40:41.246937  203153 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1109 14:40:41.247023  203153 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:40:41.248032  203153 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1109 14:40:41.248296  203153 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:40:41.248449  203153 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:40:41.248925  203153 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1109 14:40:41.249302  203153 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:40:41.249476  203153 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:40:41.249631  203153 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:40:41.249783  203153 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:40:41.498145  203153 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:40:41.507504  203153 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:40:41.513470  203153 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:40:41.520067  203153 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:40:41.521667  203153 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1109 14:40:41.529128  203153 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:40:41.555729  203153 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1109 14:40:41.555819  203153 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:40:41.555909  203153 ssh_runner.go:195] Run: which crictl
	I1109 14:40:41.569937  203153 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1109 14:40:41.628744  203153 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1109 14:40:41.628839  203153 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:40:41.628918  203153 ssh_runner.go:195] Run: which crictl
	I1109 14:40:41.668913  203153 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1109 14:40:41.668953  203153 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:40:41.669032  203153 ssh_runner.go:195] Run: which crictl
	I1109 14:40:41.676741  203153 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1109 14:40:41.676851  203153 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1109 14:40:41.676932  203153 ssh_runner.go:195] Run: which crictl
	I1109 14:40:41.677096  203153 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1109 14:40:41.677149  203153 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:40:41.677206  203153 ssh_runner.go:195] Run: which crictl
	I1109 14:40:41.682317  203153 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1109 14:40:41.682423  203153 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:40:41.682725  203153 ssh_runner.go:195] Run: which crictl
	I1109 14:40:41.682580  203153 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1109 14:40:41.682798  203153 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1109 14:40:41.682849  203153 ssh_runner.go:195] Run: which crictl
	I1109 14:40:41.682665  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:40:41.682688  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:40:41.683055  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:40:41.687835  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:40:41.688019  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1109 14:40:41.767646  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:40:41.767805  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1109 14:40:41.767955  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:40:41.768010  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:40:41.768079  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:40:41.771591  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1109 14:40:41.771783  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:40:41.874269  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1109 14:40:41.874430  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1109 14:40:41.874528  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1109 14:40:41.874615  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:40:41.874703  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1109 14:40:41.879679  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1109 14:40:41.879850  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1109 14:40:41.982164  203153 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1109 14:40:41.982278  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1109 14:40:41.982375  203153 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1109 14:40:41.982474  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1109 14:40:41.982545  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1109 14:40:41.982572  203153 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1109 14:40:41.982820  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1109 14:40:41.982621  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1109 14:40:41.982658  203153 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1109 14:40:41.982924  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1109 14:40:41.982684  203153 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1109 14:40:41.983064  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1109 14:40:42.039018  203153 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1109 14:40:42.039059  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1109 14:40:42.039144  203153 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1109 14:40:42.039251  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1109 14:40:42.039339  203153 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1109 14:40:42.039361  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1109 14:40:42.039432  203153 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1109 14:40:42.039450  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1109 14:40:42.039501  203153 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1109 14:40:42.039519  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1109 14:40:42.039538  203153 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1109 14:40:42.039562  203153 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1109 14:40:42.039652  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1109 14:40:42.039711  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1109 14:40:42.081708  203153 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1109 14:40:42.081833  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1109 14:40:42.121132  203153 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1109 14:40:42.121245  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1109 14:40:42.293161  203153 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1109 14:40:42.293278  203153 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1109 14:40:42.662475  203153 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1109 14:40:42.662713  203153 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:40:42.756384  203153 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1109 14:40:42.756479  203153 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1109 14:40:42.756533  203153 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1109 14:40:42.756598  203153 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1109 14:40:42.756622  203153 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:40:42.756847  203153 ssh_runner.go:195] Run: which crictl
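Each cached image in the lines above follows the same pattern: a stat existence check on the node, a copy of the tarball from the host cache when the check fails, and finally a podman load into cri-o's image store. Condensed into a shell sketch for one image (paths copied from the log; the copy step is performed by minikube over SSH and is shown here only as a comment):

    # run inside the no-preload-545474 node
    if ! stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1 >/dev/null 2>&1; then
        : # missing -> minikube copies .minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 to this path
    fi
    sudo podman load -i /var/lib/minikube/images/pause_3.10.1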
	I1109 14:40:42.959398  203791 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-192074
	
	I1109 14:40:42.959420  203791 ubuntu.go:182] provisioning hostname "newest-cni-192074"
	I1109 14:40:42.959483  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:43.014405  203791 main.go:143] libmachine: Using SSH client type: native
	I1109 14:40:43.014751  203791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:40:43.014766  203791 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-192074 && echo "newest-cni-192074" | sudo tee /etc/hostname
	I1109 14:40:43.232785  203791 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-192074
	
	I1109 14:40:43.232952  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:43.291576  203791 main.go:143] libmachine: Using SSH client type: native
	I1109 14:40:43.291911  203791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:40:43.291928  203791 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-192074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-192074/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-192074' | sudo tee -a /etc/hosts; 
				fi
			fi
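The script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1, so after it runs /etc/hosts is expected to contain a line of the form:

    127.0.1.1 newest-cni-192074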
	I1109 14:40:43.472127  203791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:40:43.472156  203791 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:40:43.472174  203791 ubuntu.go:190] setting up certificates
	I1109 14:40:43.472194  203791 provision.go:84] configureAuth start
	I1109 14:40:43.472262  203791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:40:43.497859  203791 provision.go:143] copyHostCerts
	I1109 14:40:43.497922  203791 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:40:43.497932  203791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:40:43.498011  203791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:40:43.498098  203791 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:40:43.498103  203791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:40:43.498130  203791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:40:43.498210  203791 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:40:43.498215  203791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:40:43.498243  203791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:40:43.498288  203791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.newest-cni-192074 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-192074]
	I1109 14:40:44.159351  203791 provision.go:177] copyRemoteCerts
	I1109 14:40:44.159473  203791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:40:44.159548  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:44.178199  203791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:40:44.293296  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:40:44.314116  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:40:44.340603  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:40:44.362295  203791 provision.go:87] duration metric: took 890.076849ms to configureAuth
	I1109 14:40:44.362370  203791 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:40:44.362613  203791 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:40:44.362762  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:44.385576  203791 main.go:143] libmachine: Using SSH client type: native
	I1109 14:40:44.385895  203791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33080 <nil> <nil>}
	I1109 14:40:44.385910  203791 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:40:44.737954  203791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:40:44.738051  203791 machine.go:97] duration metric: took 4.990896534s to provisionDockerMachine
	I1109 14:40:44.738081  203791 client.go:176] duration metric: took 11.535595613s to LocalClient.Create
	I1109 14:40:44.738136  203791 start.go:167] duration metric: took 11.535696677s to libmachine.API.Create "newest-cni-192074"
	I1109 14:40:44.738167  203791 start.go:293] postStartSetup for "newest-cni-192074" (driver="docker")
	I1109 14:40:44.738205  203791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:40:44.738324  203791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:40:44.738399  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:44.760970  203791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:40:44.869518  203791 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:40:44.873362  203791 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:40:44.873396  203791 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:40:44.873408  203791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:40:44.873470  203791 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:40:44.873554  203791 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:40:44.873669  203791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:40:44.882748  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:40:44.905195  203791 start.go:296] duration metric: took 166.989282ms for postStartSetup
	I1109 14:40:44.905609  203791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:40:44.929920  203791 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/config.json ...
	I1109 14:40:44.930189  203791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:40:44.930244  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:44.949665  203791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:40:45.078414  203791 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:40:45.084606  203791 start.go:128] duration metric: took 11.888609993s to createHost
	I1109 14:40:45.084638  203791 start.go:83] releasing machines lock for "newest-cni-192074", held for 11.888812062s
	I1109 14:40:45.084721  203791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:40:45.107048  203791 ssh_runner.go:195] Run: cat /version.json
	I1109 14:40:45.107104  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:45.107427  203791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:40:45.107486  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:40:45.153262  203791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:40:45.160170  203791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:40:45.393159  203791 ssh_runner.go:195] Run: systemctl --version
	I1109 14:40:45.403717  203791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:40:45.473058  203791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:40:45.478358  203791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:40:45.478503  203791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:40:45.519782  203791 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1109 14:40:45.519857  203791 start.go:496] detecting cgroup driver to use...
	I1109 14:40:45.520009  203791 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:40:45.520078  203791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:40:45.541712  203791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:40:45.559837  203791 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:40:45.559957  203791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:40:45.578688  203791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:40:45.601026  203791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:40:45.764533  203791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:40:45.936496  203791 docker.go:234] disabling docker service ...
	I1109 14:40:45.936633  203791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:40:45.971643  203791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:40:45.986765  203791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:40:46.150509  203791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:40:46.343297  203791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:40:46.365340  203791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:40:46.386821  203791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:40:46.386938  203791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:46.396494  203791 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:40:46.396601  203791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:46.406150  203791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:46.415671  203791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:46.425804  203791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:40:46.434540  203791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:46.444503  203791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:46.463147  203791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:40:46.474753  203791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:40:46.483080  203791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
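The sed/grep sequence above edits the cri-o drop-in /etc/crio/crio.conf.d/02-crio.conf in place; a quick way to confirm the result (expected values inferred from the commands above, assuming the default keys were present to rewrite):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected output:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",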
	I1109 14:40:46.490800  203791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:40:46.677416  203791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:40:47.322975  203791 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:40:47.323062  203791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:40:47.327942  203791 start.go:564] Will wait 60s for crictl version
	I1109 14:40:47.328010  203791 ssh_runner.go:195] Run: which crictl
	I1109 14:40:47.334022  203791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:40:47.383335  203791 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:40:47.383485  203791 ssh_runner.go:195] Run: crio --version
	I1109 14:40:47.415756  203791 ssh_runner.go:195] Run: crio --version
	I1109 14:40:47.464656  203791 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:40:47.467800  203791 cli_runner.go:164] Run: docker network inspect newest-cni-192074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:40:47.488579  203791 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:40:47.495118  203791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:40:47.508819  203791 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1109 14:40:47.511714  203791 kubeadm.go:884] updating cluster {Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:40:47.511863  203791 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:40:47.511960  203791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:40:47.559554  203791 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:40:47.559582  203791 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:40:47.559635  203791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:40:47.591045  203791 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:40:47.591071  203791 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:40:47.591079  203791 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:40:47.591180  203791 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-192074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:40:47.591261  203791 ssh_runner.go:195] Run: crio config
	I1109 14:40:47.674256  203791 cni.go:84] Creating CNI manager for ""
	I1109 14:40:47.674280  203791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:40:47.674294  203791 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1109 14:40:47.674326  203791 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-192074 NodeName:newest-cni-192074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:40:47.674482  203791 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-192074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:40:47.674569  203791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:40:47.684719  203791 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:40:47.684828  203791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:40:47.702108  203791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:40:47.720106  203791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:40:47.735741  203791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1109 14:40:47.751396  203791 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:40:45.364971  203153 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.608417875s)
	I1109 14:40:45.365002  203153 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1109 14:40:45.365022  203153 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1109 14:40:45.365080  203153 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1109 14:40:45.365168  203153 ssh_runner.go:235] Completed: which crictl: (2.608303239s)
	I1109 14:40:45.365204  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:40:47.843946  203153 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.478717214s)
	I1109 14:40:47.844025  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:40:47.844043  203153 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (2.478943218s)
	I1109 14:40:47.844058  203153 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1109 14:40:47.844075  203153 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1109 14:40:47.844124  203153 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1109 14:40:47.926544  203153 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:40:49.123273  203153 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.279126312s)
	I1109 14:40:49.123302  203153 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1109 14:40:49.123320  203153 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1109 14:40:49.123380  203153 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1109 14:40:49.123446  203153 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.196879479s)
	I1109 14:40:49.123476  203153 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1109 14:40:49.123546  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1109 14:40:47.755379  203791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:40:47.767962  203791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:40:47.916606  203791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:40:47.938116  203791 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074 for IP: 192.168.76.2
	I1109 14:40:47.938136  203791 certs.go:195] generating shared ca certs ...
	I1109 14:40:47.938153  203791 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:40:47.938290  203791 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:40:47.938327  203791 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:40:47.938334  203791 certs.go:257] generating profile certs ...
	I1109 14:40:47.938399  203791 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/client.key
	I1109 14:40:47.938409  203791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/client.crt with IP's: []
	I1109 14:40:49.141195  203791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/client.crt ...
	I1109 14:40:49.141276  203791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/client.crt: {Name:mk04fe9aad0a12c0e5c78a176bb534c5d3dc71bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:40:49.141483  203791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/client.key ...
	I1109 14:40:49.141518  203791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/client.key: {Name:mkf13c18426bf755fd81f8394c6ae4950b71875c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:40:49.141631  203791 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key.19ad1ce3
	I1109 14:40:49.141674  203791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.crt.19ad1ce3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1109 14:40:49.560420  203791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.crt.19ad1ce3 ...
	I1109 14:40:49.560495  203791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.crt.19ad1ce3: {Name:mkadb6fee65e31ebd5571550ef59fb1e47868b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:40:49.560728  203791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key.19ad1ce3 ...
	I1109 14:40:49.560765  203791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key.19ad1ce3: {Name:mka39c3689eaf7e8606cd12107c67cff6ee5c773 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:40:49.560884  203791 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.crt.19ad1ce3 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.crt
	I1109 14:40:49.560997  203791 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key.19ad1ce3 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key
	I1109 14:40:49.561106  203791 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.key
	I1109 14:40:49.561193  203791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.crt with IP's: []
	I1109 14:40:49.614473  203791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.crt ...
	I1109 14:40:49.614546  203791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.crt: {Name:mk86c6d1f70320292389058106d5a141c9749b63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:40:49.614738  203791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.key ...
	I1109 14:40:49.614774  203791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.key: {Name:mkf69d4163e3951c483dce463792b7bd0296f33c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
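The apiserver serving certificate generated above is signed by the profile CA and carries the SANs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). minikube does this in Go; a rough openssl equivalent, useful only to illustrate the shape of the cert (the CN and validity period are assumptions, not taken from the log):

    openssl req -new -newkey rsa:2048 -nodes \
        -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
        -out apiserver.crt -days 365 \
        -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.76.2')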
	I1109 14:40:49.614986  203791 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:40:49.615068  203791 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:40:49.615095  203791 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:40:49.615147  203791 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:40:49.615191  203791 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:40:49.615240  203791 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:40:49.615303  203791 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:40:49.615955  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:40:49.633987  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:40:49.653237  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:40:49.672425  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:40:49.690709  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:40:49.710512  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:40:49.731004  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:40:49.750076  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:40:49.769745  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:40:49.825511  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:40:49.865754  203791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:40:49.911507  203791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:40:49.942406  203791 ssh_runner.go:195] Run: openssl version
	I1109 14:40:49.952020  203791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:40:49.970330  203791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:40:49.974421  203791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:40:49.974547  203791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:40:50.038910  203791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:40:50.053314  203791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:40:50.071073  203791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:40:50.075445  203791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:40:50.075583  203791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:40:50.138944  203791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:40:50.148389  203791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:40:50.157871  203791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:40:50.162150  203791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:40:50.162271  203791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:40:50.203579  203791 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:40:50.220594  203791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:40:50.224636  203791 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
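	Note (editorial, not part of the captured log): the "cert doesn't exist, likely first start" message above comes from treating a non-zero exit from stat as "file absent". A minimal local sketch of that pattern in Go, assuming a plain exec of stat rather than minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

// fileExistsViaStat mirrors the pattern in the log: run `stat` on the path
// and treat a non-zero exit status as "the file is not there yet".
func fileExistsViaStat(path string) bool {
	cmd := exec.Command("stat", path)
	// Any error (including exit status 1 from stat) is read as "absent".
	return cmd.Run() == nil
}

func main() {
	if !fileExistsViaStat("/var/lib/minikube/certs/apiserver-kubelet-client.crt") {
		fmt.Println("cert doesn't exist, likely first start")
	}
}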
	I1109 14:40:50.224713  203791 kubeadm.go:401] StartCluster: {Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:40:50.224813  203791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:40:50.224910  203791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:40:50.270313  203791 cri.go:89] found id: ""
	I1109 14:40:50.270418  203791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:40:50.278676  203791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:40:50.287042  203791 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:40:50.287137  203791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:40:50.295425  203791 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:40:50.295447  203791 kubeadm.go:158] found existing configuration files:
	
	I1109 14:40:50.295523  203791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:40:50.308795  203791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:40:50.308887  203791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:40:50.317034  203791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:40:50.327170  203791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:40:50.327232  203791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:40:50.335803  203791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:40:50.345319  203791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:40:50.345380  203791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:40:50.357100  203791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:40:50.366358  203791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:40:50.366420  203791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:40:50.377940  203791 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:40:50.429659  203791 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:40:50.430121  203791 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:40:50.476987  203791 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:40:50.477128  203791 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 14:40:50.477182  203791 kubeadm.go:319] OS: Linux
	I1109 14:40:50.477259  203791 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:40:50.477342  203791 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 14:40:50.477418  203791 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:40:50.477497  203791 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:40:50.477578  203791 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:40:50.477656  203791 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:40:50.477731  203791 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:40:50.477806  203791 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:40:50.477885  203791 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 14:40:50.592328  203791 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:40:50.592509  203791 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:40:50.592641  203791 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:40:50.608279  203791 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:40:50.617704  203791 out.go:252]   - Generating certificates and keys ...
	I1109 14:40:50.617871  203791 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:40:50.617975  203791 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:40:52.416714  203791 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:40:51.070525  203153 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.946952946s)
	I1109 14:40:51.070561  203153 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1109 14:40:51.070587  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1109 14:40:51.070678  203153 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.947283165s)
	I1109 14:40:51.070694  203153 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1109 14:40:51.070711  203153 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1109 14:40:51.070753  203153 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1109 14:40:52.928987  203153 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.858209686s)
	I1109 14:40:52.929015  203153 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1109 14:40:52.929033  203153 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1109 14:40:52.929081  203153 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1109 14:40:53.311472  203791 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:40:53.886694  203791 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:40:54.812989  203791 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:40:55.352359  203791 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:40:55.354248  203791 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-192074] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:40:55.488974  203791 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:40:55.489591  203791 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-192074] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:40:55.676726  203791 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:40:55.966170  203791 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:40:56.098949  203791 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:40:56.099604  203791 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:40:57.245208  203791 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:40:57.669297  203791 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:40:57.738649  203791 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:40:57.991684  203791 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:40:58.098777  203791 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:40:58.099990  203791 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:40:58.108591  203791 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:40:57.691493  203153 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.762385996s)
	I1109 14:40:57.691520  203153 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1109 14:40:57.691537  203153 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1109 14:40:57.691618  203153 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1109 14:40:58.433575  203153 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1109 14:40:58.433611  203153 cache_images.go:125] Successfully loaded all cached images
	I1109 14:40:58.433618  203153 cache_images.go:94] duration metric: took 17.187335975s to LoadCachedImages
	I1109 14:40:58.433629  203153 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1109 14:40:58.433715  203153 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-545474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:40:58.433799  203153 ssh_runner.go:195] Run: crio config
	I1109 14:40:58.522972  203153 cni.go:84] Creating CNI manager for ""
	I1109 14:40:58.522996  203153 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:40:58.523013  203153 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:40:58.523036  203153 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545474 NodeName:no-preload-545474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:40:58.523167  203153 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
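	
	Note (editorial, not part of the captured log): the generated kubeadm.yaml shown above is a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---). A hedged sketch of reading such a file document by document with gopkg.in/yaml.v3; the struct fields are a small illustrative subset, not minikube's own types:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// doc captures just enough of each YAML document to identify it.
type doc struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Prints e.g. "kubeadm.k8s.io/v1beta4 ClusterConfiguration".
		fmt.Println(d.APIVersion, d.Kind)
	}
}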
	
	I1109 14:40:58.523243  203153 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:40:58.534754  203153 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1109 14:40:58.534817  203153 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1109 14:40:58.548958  203153 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1109 14:40:58.549054  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1109 14:40:58.549934  203153 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1109 14:40:58.550400  203153 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1109 14:40:58.554600  203153 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1109 14:40:58.554634  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1109 14:40:59.537462  203153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:40:59.574206  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1109 14:40:59.577888  203153 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1109 14:40:59.582955  203153 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1109 14:40:59.582995  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1109 14:40:59.596982  203153 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1109 14:40:59.597021  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
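	Note (editorial, not part of the captured log): the kubectl/kubelet/kubeadm downloads above use URLs of the form ...?checksum=file:<url>.sha256, i.e. each binary is verified against a published SHA-256 file. A minimal sketch of that verify-after-download pattern in Go; the helper names are illustrative and this is not minikube's download package:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// download fetches url into dest and returns the hex SHA-256 of what was written.
func download(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl" // URL from the log
	got, err := download(base, "kubectl")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	// The .sha256 file contains the expected digest as its first field.
	if got != strings.Fields(string(want))[0] {
		panic("checksum mismatch")
	}
	fmt.Println("kubectl verified:", got)
}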
	I1109 14:40:58.112180  203791 out.go:252]   - Booting up control plane ...
	I1109 14:40:58.112301  203791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:40:58.112382  203791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:40:58.112454  203791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:40:58.147438  203791 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:40:58.147556  203791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:40:58.156827  203791 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:40:58.159522  203791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:40:58.159576  203791 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:40:58.330977  203791 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:40:58.331102  203791 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:41:00.368271  203791 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.042071182s
	I1109 14:41:00.375338  203791 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:41:00.375442  203791 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1109 14:41:00.375544  203791 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:41:00.375649  203791 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:41:00.377660  203153 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:41:00.397855  203153 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:41:00.416590  203153 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:41:00.439645  203153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1109 14:41:00.469171  203153 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:41:00.477019  203153 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
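	Note (editorial, not part of the captured log): the bash one-liner above rewrites /etc/hosts so that exactly one control-plane.minikube.internal entry remains. The same idea, sketched in Go purely for illustration (it writes the result to a temp file instead of /etc/hosts and assumes tab-separated host entries, as in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any existing lines for host and appends "ip\thost".
func ensureHostEntry(hostsPath, ip, host string) (string, error) {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return "", err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove any stale entry for this host
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n", nil
}

func main() {
	out, err := ensureHostEntry("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal")
	if err != nil {
		panic(err)
	}
	// Write the result somewhere harmless for inspection instead of /etc/hosts.
	if err := os.WriteFile("/tmp/hosts.updated", []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.updated")
}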
	I1109 14:41:00.491974  203153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:00.631215  203153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:41:00.654137  203153 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474 for IP: 192.168.85.2
	I1109 14:41:00.654203  203153 certs.go:195] generating shared ca certs ...
	I1109 14:41:00.654235  203153 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:00.654407  203153 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:41:00.654485  203153 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:41:00.654508  203153 certs.go:257] generating profile certs ...
	I1109 14:41:00.654578  203153 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.key
	I1109 14:41:00.654605  203153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt with IP's: []
	I1109 14:41:01.755266  203153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt ...
	I1109 14:41:01.755339  203153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: {Name:mk26f76250aaef3610d7043042cee123a694c0d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:01.755568  203153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.key ...
	I1109 14:41:01.755602  203153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.key: {Name:mk68313704479429c81955978cad5138ffa86f30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:01.755742  203153 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key.33b59cf6
	I1109 14:41:01.755780  203153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.crt.33b59cf6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1109 14:41:02.381542  203153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.crt.33b59cf6 ...
	I1109 14:41:02.381615  203153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.crt.33b59cf6: {Name:mkc242cf9ec7f086f9b83fbbe2e17f8b3c8b973d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:02.381832  203153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key.33b59cf6 ...
	I1109 14:41:02.381868  203153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key.33b59cf6: {Name:mk707bc166c59473ceab96493bdb8be63ff619e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:02.381996  203153 certs.go:382] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.crt.33b59cf6 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.crt
	I1109 14:41:02.382114  203153 certs.go:386] copying /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key.33b59cf6 -> /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key
	I1109 14:41:02.382250  203153 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.key
	I1109 14:41:02.382292  203153 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.crt with IP's: []
	I1109 14:41:02.664086  203153 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.crt ...
	I1109 14:41:02.664161  203153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.crt: {Name:mkc153bf9206980c9feba9867a86e1504626011e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:02.664384  203153 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.key ...
	I1109 14:41:02.664418  203153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.key: {Name:mk481c10cff03f058ec694fbd8b96b8cdbf59e03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:02.664696  203153 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:41:02.664762  203153 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:41:02.664786  203153 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:41:02.664841  203153 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:41:02.664888  203153 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:41:02.664927  203153 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:41:02.665008  203153 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:41:02.665664  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:41:02.692651  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:41:02.730031  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:41:02.755754  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:41:02.786110  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:41:02.809827  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:41:02.841950  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:41:02.872929  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:41:02.906100  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:41:02.945754  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:41:02.972000  203153 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:41:02.992750  203153 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:41:03.006420  203153 ssh_runner.go:195] Run: openssl version
	I1109 14:41:03.015136  203153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:41:03.042139  203153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:41:03.047017  203153 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:41:03.047140  203153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:41:03.113648  203153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:41:03.134429  203153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:41:03.149060  203153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:03.157584  203153 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:03.157707  203153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:03.203331  203153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:41:03.212982  203153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:41:03.222872  203153 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:41:03.227490  203153 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:41:03.227608  203153 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:41:03.279445  203153 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:41:03.289516  203153 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:41:03.294860  203153 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1109 14:41:03.294979  203153 kubeadm.go:401] StartCluster: {Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:41:03.295082  203153 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:41:03.295168  203153 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:41:03.338306  203153 cri.go:89] found id: ""
	I1109 14:41:03.338432  203153 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:41:03.356345  203153 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 14:41:03.370197  203153 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1109 14:41:03.370313  203153 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 14:41:03.386214  203153 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 14:41:03.386287  203153 kubeadm.go:158] found existing configuration files:
	
	I1109 14:41:03.386374  203153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 14:41:03.402313  203153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1109 14:41:03.402430  203153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1109 14:41:03.410874  203153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 14:41:03.420390  203153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1109 14:41:03.420499  203153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1109 14:41:03.428806  203153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 14:41:03.438150  203153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1109 14:41:03.438263  203153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 14:41:03.453944  203153 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 14:41:03.470355  203153 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1109 14:41:03.470491  203153 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 14:41:03.484536  203153 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 14:41:03.547041  203153 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1109 14:41:03.547340  203153 kubeadm.go:319] [preflight] Running pre-flight checks
	I1109 14:41:03.583419  203153 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1109 14:41:03.583560  203153 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1109 14:41:03.583618  203153 kubeadm.go:319] OS: Linux
	I1109 14:41:03.583693  203153 kubeadm.go:319] CGROUPS_CPU: enabled
	I1109 14:41:03.583806  203153 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1109 14:41:03.583958  203153 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1109 14:41:03.584067  203153 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1109 14:41:03.584165  203153 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1109 14:41:03.584254  203153 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1109 14:41:03.584339  203153 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1109 14:41:03.584420  203153 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1109 14:41:03.584503  203153 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1109 14:41:03.694016  203153 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 14:41:03.694198  203153 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 14:41:03.694363  203153 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 14:41:03.734795  203153 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 14:41:03.740960  203153 out.go:252]   - Generating certificates and keys ...
	I1109 14:41:03.741062  203153 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1109 14:41:03.741137  203153 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1109 14:41:04.288115  203153 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 14:41:04.752217  203153 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1109 14:41:05.824275  203153 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1109 14:41:06.404217  203153 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:41:08.198297  203153 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:41:08.198852  203153 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-545474] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:41:08.751654  203153 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:41:08.752208  203153 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-545474] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1109 14:41:09.220303  203153 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:41:09.831046  203153 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:41:09.924220  203153 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:41:09.928245  203153 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:41:09.085833  203791 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 8.712041717s
	I1109 14:41:10.921496  203791 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.54815083s
	I1109 14:41:12.877006  203791 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 12.503611452s
	I1109 14:41:12.902563  203791 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:41:12.922107  203791 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:41:12.939249  203791 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:41:12.939712  203791 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-192074 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:41:12.955689  203791 kubeadm.go:319] [bootstrap-token] Using token: nmlbhn.o654qno1ajola0cj
	I1109 14:41:10.493015  203153 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:41:11.484053  203153 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:41:12.130921  203153 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:41:12.446842  203153 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:41:13.083883  203153 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:41:13.085190  203153 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:41:13.088353  203153 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:41:12.958659  203791 out.go:252]   - Configuring RBAC rules ...
	I1109 14:41:12.958790  203791 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:41:12.968782  203791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:41:12.978272  203791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:41:12.983085  203791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:41:12.988265  203791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:41:12.993482  203791 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:41:13.285447  203791 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:41:13.721858  203791 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:41:14.286134  203791 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:41:14.287475  203791 kubeadm.go:319] 
	I1109 14:41:14.287557  203791 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:41:14.287566  203791 kubeadm.go:319] 
	I1109 14:41:14.287647  203791 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:41:14.287652  203791 kubeadm.go:319] 
	I1109 14:41:14.287678  203791 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:41:14.287742  203791 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:41:14.287794  203791 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:41:14.287799  203791 kubeadm.go:319] 
	I1109 14:41:14.287855  203791 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:41:14.287859  203791 kubeadm.go:319] 
	I1109 14:41:14.287916  203791 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:41:14.287922  203791 kubeadm.go:319] 
	I1109 14:41:14.287976  203791 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:41:14.288055  203791 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:41:14.288127  203791 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:41:14.288131  203791 kubeadm.go:319] 
	I1109 14:41:14.288231  203791 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:41:14.288313  203791 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:41:14.288317  203791 kubeadm.go:319] 
	I1109 14:41:14.288610  203791 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nmlbhn.o654qno1ajola0cj \
	I1109 14:41:14.288724  203791 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 14:41:14.288746  203791 kubeadm.go:319] 	--control-plane 
	I1109 14:41:14.288750  203791 kubeadm.go:319] 
	I1109 14:41:14.288839  203791 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:41:14.288844  203791 kubeadm.go:319] 
	I1109 14:41:14.288929  203791 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nmlbhn.o654qno1ajola0cj \
	I1109 14:41:14.289037  203791 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 14:41:14.291834  203791 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:41:14.292134  203791 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 14:41:14.292292  203791 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
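	Note (editorial, not part of the captured log): the --discovery-token-ca-cert-hash printed in the kubeadm join command above follows kubeadm's documented format: "sha256:" plus the hex SHA-256 of the cluster CA certificate's Subject Public Key Info. A small Go sketch that recomputes it from the CA PEM; the path is the one used throughout this log and the snippet is illustrative, not minikube code:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA path as provisioned by minikube in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}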
	I1109 14:41:14.292313  203791 cni.go:84] Creating CNI manager for ""
	I1109 14:41:14.292321  203791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:41:14.295377  203791 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:41:13.091789  203153 out.go:252]   - Booting up control plane ...
	I1109 14:41:13.091952  203153 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:41:13.092070  203153 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:41:13.095364  203153 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:41:13.112594  203153 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:41:13.112739  203153 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:41:13.120429  203153 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:41:13.120538  203153 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:41:13.120583  203153 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:41:13.254967  203153 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:41:13.255099  203153 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:41:14.761025  203153 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501286845s
	I1109 14:41:14.761175  203153 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:41:14.761289  203153 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1109 14:41:14.761403  203153 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:41:14.761500  203153 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:41:14.298271  203791 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:41:14.302895  203791 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:41:14.302923  203791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:41:14.316875  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:41:14.725248  203791 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:41:14.725384  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:14.725504  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-192074 minikube.k8s.io/updated_at=2025_11_09T14_41_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=newest-cni-192074 minikube.k8s.io/primary=true
	I1109 14:41:14.893173  203791 ops.go:34] apiserver oom_adj: -16
	I1109 14:41:14.893282  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:15.393695  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:15.893560  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:16.394304  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:16.893737  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:17.393642  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:17.893338  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:18.393372  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:18.894343  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:19.393401  203791 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:19.651966  203791 kubeadm.go:1114] duration metric: took 4.926630452s to wait for elevateKubeSystemPrivileges
	I1109 14:41:19.651998  203791 kubeadm.go:403] duration metric: took 29.42731286s to StartCluster
	I1109 14:41:19.652015  203791 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:19.652077  203791 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:19.652704  203791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:19.652932  203791 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:41:19.653054  203791 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:41:19.653317  203791 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:19.653368  203791 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:41:19.653434  203791 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-192074"
	I1109 14:41:19.653449  203791 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-192074"
	I1109 14:41:19.653473  203791 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:19.654195  203791 addons.go:70] Setting default-storageclass=true in profile "newest-cni-192074"
	I1109 14:41:19.654217  203791 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-192074"
	I1109 14:41:19.654487  203791 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:19.654727  203791 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:19.658564  203791 out.go:179] * Verifying Kubernetes components...
	I1109 14:41:19.662114  203791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:19.697604  203791 addons.go:239] Setting addon default-storageclass=true in "newest-cni-192074"
	I1109 14:41:19.697647  203791 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:19.698051  203791 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:19.711937  203791 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:41:18.225493  203153 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.465394914s
	I1109 14:41:19.715806  203791 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:19.715832  203791 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:41:19.715914  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:19.739683  203791 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:19.739703  203791 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:41:19.739765  203791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:19.750898  203791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:19.775994  203791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33080 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:20.353619  203791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:20.441267  203791 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:20.488708  203791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:41:20.488763  203791 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:41:21.711434  203791 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.270121793s)
	I1109 14:41:21.711538  203791 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.222756708s)
	I1109 14:41:21.711779  203791 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1109 14:41:21.711555  203791 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.222826133s)
	I1109 14:41:21.712743  203791 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:41:21.712826  203791 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:41:21.715093  203791 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1109 14:41:21.717978  203791 addons.go:515] duration metric: took 2.064594933s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1109 14:41:21.741239  203791 api_server.go:72] duration metric: took 2.088253131s to wait for apiserver process to appear ...
	I1109 14:41:21.741314  203791 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:41:21.741347  203791 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:21.757333  203791 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:41:21.762675  203791 api_server.go:141] control plane version: v1.34.1
	I1109 14:41:21.762752  203791 api_server.go:131] duration metric: took 21.417463ms to wait for apiserver health ...
	I1109 14:41:21.762776  203791 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:41:21.767593  203791 system_pods.go:59] 9 kube-system pods found
	I1109 14:41:21.767682  203791 system_pods.go:61] "coredns-66bc5c9577-6brdt" [50d6b82b-8e51-463c-82a3-a4a103105b6a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:41:21.767706  203791 system_pods.go:61] "coredns-66bc5c9577-n5b7f" [9dd49fb3-4cbd-42ac-bb69-15d98fb66ac4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:41:21.767756  203791 system_pods.go:61] "etcd-newest-cni-192074" [c5ddb834-a41a-4e78-8b40-e27ff57c60d9] Running
	I1109 14:41:21.767776  203791 system_pods.go:61] "kindnet-gmcpd" [00d2ffcc-cb88-4632-8efd-e59fe208d3c8] Running
	I1109 14:41:21.767795  203791 system_pods.go:61] "kube-apiserver-newest-cni-192074" [b2ba9393-513f-4735-b9fa-713bf9ac8fed] Running
	I1109 14:41:21.767830  203791 system_pods.go:61] "kube-controller-manager-newest-cni-192074" [5ecab913-3ba7-42cc-a66f-7a8e512c6c71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:41:21.767855  203791 system_pods.go:61] "kube-proxy-vjt4x" [4f389cd7-7dd5-439e-b590-9e4390f0a638] Running
	I1109 14:41:21.767893  203791 system_pods.go:61] "kube-scheduler-newest-cni-192074" [265941ce-4026-4e49-891b-10d612942e7f] Running
	I1109 14:41:21.767937  203791 system_pods.go:61] "storage-provisioner" [9f3003f1-507f-461b-bff4-e19dafefcd23] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:41:21.767960  203791 system_pods.go:74] duration metric: took 5.167424ms to wait for pod list to return data ...
	I1109 14:41:21.767984  203791 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:41:21.780641  203791 default_sa.go:45] found service account: "default"
	I1109 14:41:21.780740  203791 default_sa.go:55] duration metric: took 12.736586ms for default service account to be created ...
	I1109 14:41:21.780767  203791 kubeadm.go:587] duration metric: took 2.12780458s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:41:21.780814  203791 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:41:21.796811  203791 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:41:21.796892  203791 node_conditions.go:123] node cpu capacity is 2
	I1109 14:41:21.796920  203791 node_conditions.go:105] duration metric: took 16.083842ms to run NodePressure ...
	I1109 14:41:21.796946  203791 start.go:242] waiting for startup goroutines ...
	I1109 14:41:22.215689  203791 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-192074" context rescaled to 1 replicas
	I1109 14:41:22.215779  203791 start.go:247] waiting for cluster config update ...
	I1109 14:41:22.215806  203791 start.go:256] writing updated cluster config ...
	I1109 14:41:22.216230  203791 ssh_runner.go:195] Run: rm -f paused
	I1109 14:41:22.318813  203791 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:41:22.322014  203791 out.go:179] * Done! kubectl is now configured to use "newest-cni-192074" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.444210359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.460421249Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=be0cf9cb-80cd-42d1-a9b1-99c8446a0ecc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.485720274Z" level=info msg="Ran pod sandbox ed0de81cec08a23c5326690db30e51173b72fb0566674b559c236c457b13c469 with infra container: kube-system/kindnet-gmcpd/POD" id=be0cf9cb-80cd-42d1-a9b1-99c8446a0ecc name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.495146405Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c5e75e14-1a04-4885-a955-0f4d93643fc0 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.499082064Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=492f1134-d42f-4762-bf4a-bded675c0857 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.508997897Z" level=info msg="Creating container: kube-system/kindnet-gmcpd/kindnet-cni" id=bb247c2d-befc-476f-8fbd-f332c4fdfce3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.509096114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.519822178Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.521154818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.580279773Z" level=info msg="Created container 1a2a533efbda015e7b9c0ab01cdbd1d0a8038bb32d58c00746c92c99f1c92697: kube-system/kindnet-gmcpd/kindnet-cni" id=bb247c2d-befc-476f-8fbd-f332c4fdfce3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.581187474Z" level=info msg="Starting container: 1a2a533efbda015e7b9c0ab01cdbd1d0a8038bb32d58c00746c92c99f1c92697" id=52cfe99a-9af1-4e82-a9d0-3f96bae5ae45 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.58431803Z" level=info msg="Started container" PID=1409 containerID=1a2a533efbda015e7b9c0ab01cdbd1d0a8038bb32d58c00746c92c99f1c92697 description=kube-system/kindnet-gmcpd/kindnet-cni id=52cfe99a-9af1-4e82-a9d0-3f96bae5ae45 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed0de81cec08a23c5326690db30e51173b72fb0566674b559c236c457b13c469
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.957210525Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-vjt4x/POD" id=777c0a11-07b1-4286-930e-0bc817214bb1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:19 newest-cni-192074 crio[841]: time="2025-11-09T14:41:19.957306042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.010147189Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=777c0a11-07b1-4286-930e-0bc817214bb1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.029394096Z" level=info msg="Ran pod sandbox 506584050c8c5c66eec504345d985ca652b3b0e2839cd8d4d32dec46c7e7ed0a with infra container: kube-system/kube-proxy-vjt4x/POD" id=777c0a11-07b1-4286-930e-0bc817214bb1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.053807992Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=22adcad7-d3e7-4916-b318-ae810a3edddc name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.069351694Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e4ac8524-2562-4d75-97f7-9f9e22d89b61 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.099618395Z" level=info msg="Creating container: kube-system/kube-proxy-vjt4x/kube-proxy" id=4be6ebce-1e58-44bc-a902-9bb5b09a9742 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.099730766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.151687154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.164114773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.21561263Z" level=info msg="Created container 7480ebdc7544197fe0ef85ccbd72ad9d72f9b81c91881d188b17a62b04a9985f: kube-system/kube-proxy-vjt4x/kube-proxy" id=4be6ebce-1e58-44bc-a902-9bb5b09a9742 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.219527628Z" level=info msg="Starting container: 7480ebdc7544197fe0ef85ccbd72ad9d72f9b81c91881d188b17a62b04a9985f" id=4ad1aed5-69ef-4c3a-9cb0-4b2dcd5712c2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:41:20 newest-cni-192074 crio[841]: time="2025-11-09T14:41:20.24314946Z" level=info msg="Started container" PID=1492 containerID=7480ebdc7544197fe0ef85ccbd72ad9d72f9b81c91881d188b17a62b04a9985f description=kube-system/kube-proxy-vjt4x/kube-proxy id=4ad1aed5-69ef-4c3a-9cb0-4b2dcd5712c2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=506584050c8c5c66eec504345d985ca652b3b0e2839cd8d4d32dec46c7e7ed0a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7480ebdc75441       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                0                   506584050c8c5       kube-proxy-vjt4x                            kube-system
	1a2a533efbda0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               0                   ed0de81cec08a       kindnet-gmcpd                               kube-system
	f3409e75e905b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   22 seconds ago      Running             kube-controller-manager   0                   67e10f0ba2dc1       kube-controller-manager-newest-cni-192074   kube-system
	5d8e35801bf80       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago      Running             etcd                      0                   31f2da2bc7001       etcd-newest-cni-192074                      kube-system
	0b95cddf662f6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   23 seconds ago      Running             kube-scheduler            0                   653122cbd3420       kube-scheduler-newest-cni-192074            kube-system
	dc0b946a00ec3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   23 seconds ago      Running             kube-apiserver            0                   ed1446838fc55       kube-apiserver-newest-cni-192074            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-192074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-192074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=newest-cni-192074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_41_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:41:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-192074
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:41:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:41:14 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:41:14 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:41:14 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 09 Nov 2025 14:41:14 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-192074
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c64f5fab-6069-4738-9e11-1ea44009e643
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-192074                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12s
	  kube-system                 kindnet-gmcpd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6s
	  kube-system                 kube-apiserver-newest-cni-192074             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-controller-manager-newest-cni-192074    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 kube-proxy-vjt4x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kube-scheduler-newest-cni-192074             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node newest-cni-192074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node newest-cni-192074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s (x8 over 24s)  kubelet          Node newest-cni-192074 status is now: NodeHasSufficientPID
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10s                kubelet          Node newest-cni-192074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s                kubelet          Node newest-cni-192074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s                kubelet          Node newest-cni-192074 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-192074 event: Registered Node newest-cni-192074 in Controller
	
	
	==> dmesg <==
	[Nov 9 14:18] overlayfs: idmapped layers are currently not supported
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:40] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:41] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5d8e35801bf80af732a5bad146adcc24a59bca8bfbf07d7e461827ab04c45b19] <==
	{"level":"warn","ts":"2025-11-09T14:41:07.566144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.600482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.616592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.680055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.712660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.731517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.794154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.800767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.837139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.947575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.977610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:07.999964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.046703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.136099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.165942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.216311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.264183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.322179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.368000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.403026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.489332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.540664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.637964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.647206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:08.930869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36350","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:41:24 up  1:23,  0 user,  load average: 5.62, 4.00, 3.06
	Linux newest-cni-192074 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a2a533efbda015e7b9c0ab01cdbd1d0a8038bb32d58c00746c92c99f1c92697] <==
	I1109 14:41:19.829100       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:41:19.829439       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:41:19.829547       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:41:19.829558       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:41:19.829570       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:41:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:41:20.021085       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:41:20.021108       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:41:20.021118       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:41:20.021463       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [dc0b946a00ec3559ee380982b956456f196bfe28b3920cacf02bbe1f9fa21552] <==
	I1109 14:41:10.724734       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:41:10.734614       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:41:10.740366       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:41:10.759835       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:41:10.821048       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:41:10.822115       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1109 14:41:10.878668       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1109 14:41:11.095848       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:41:11.220894       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:41:11.239469       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:41:11.239568       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:41:12.334875       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:41:12.399877       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:41:12.581107       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:41:12.628100       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1109 14:41:12.629421       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:41:12.642645       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:41:13.385233       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:41:13.696355       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:41:13.720169       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:41:13.743555       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:41:18.977956       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1109 14:41:19.210921       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:41:19.378898       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:41:19.384679       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [f3409e75e905b0c1c82dca724e22bb6b2055231b22d7e62e9f800f0e6a8df185] <==
	I1109 14:41:18.441278       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1109 14:41:18.442400       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:41:18.453739       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:41:18.461249       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:41:18.464383       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:41:18.465669       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:41:18.470704       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:41:18.470753       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:41:18.470784       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1109 14:41:18.470820       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:41:18.471036       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:41:18.471200       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:41:18.471219       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1109 14:41:18.471650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:41:18.471691       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1109 14:41:18.471728       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1109 14:41:18.471778       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:41:18.471819       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:41:18.471911       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:41:18.481001       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:41:18.549967       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-192074" podCIDRs=["10.42.0.0/24"]
	I1109 14:41:18.592117       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:41:18.631797       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:41:18.631817       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:41:18.631825       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [7480ebdc7544197fe0ef85ccbd72ad9d72f9b81c91881d188b17a62b04a9985f] <==
	I1109 14:41:20.452464       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:41:20.598091       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:41:20.698925       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:41:20.698972       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:41:20.699069       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:41:20.863632       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:41:20.863692       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:41:20.888041       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:41:20.888411       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:41:20.888433       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:41:20.889837       1 config.go:200] "Starting service config controller"
	I1109 14:41:20.889854       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:41:20.889872       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:41:20.889876       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:41:20.889887       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:41:20.889907       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:41:20.890518       1 config.go:309] "Starting node config controller"
	I1109 14:41:20.890526       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:41:20.890532       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:41:20.993546       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:41:20.993582       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:41:20.993619       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0b95cddf662f677b5dbdd5491f305e3854e972dc5839731bae5b95f895312c1f] <==
	I1109 14:41:10.870910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1109 14:41:10.927968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1109 14:41:10.929881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:41:10.934875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:41:10.935058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:41:10.935152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:41:10.935478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:41:10.935733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:41:10.936176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:41:10.939302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:41:10.939754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:41:10.942711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1109 14:41:10.943028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:41:10.944448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:41:10.945789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:41:10.947972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:41:10.948101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:41:10.948227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:41:10.948419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:41:10.959760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:41:11.758561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:41:11.829862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1109 14:41:11.904663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:41:11.904822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1109 14:41:12.571953       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:41:14 newest-cni-192074 kubelet[1297]: I1109 14:41:14.810076    1297 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:41:15 newest-cni-192074 kubelet[1297]: I1109 14:41:15.011386    1297 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-192074"
	Nov 09 14:41:15 newest-cni-192074 kubelet[1297]: I1109 14:41:15.011958    1297 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-192074"
	Nov 09 14:41:15 newest-cni-192074 kubelet[1297]: I1109 14:41:15.013040    1297 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-192074"
	Nov 09 14:41:15 newest-cni-192074 kubelet[1297]: E1109 14:41:15.073289    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-192074\" already exists" pod="kube-system/kube-apiserver-newest-cni-192074"
	Nov 09 14:41:15 newest-cni-192074 kubelet[1297]: E1109 14:41:15.074182    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-192074\" already exists" pod="kube-system/kube-controller-manager-newest-cni-192074"
	Nov 09 14:41:15 newest-cni-192074 kubelet[1297]: E1109 14:41:15.074565    1297 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-192074\" already exists" pod="kube-system/kube-scheduler-newest-cni-192074"
	Nov 09 14:41:15 newest-cni-192074 kubelet[1297]: I1109 14:41:15.075484    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-192074" podStartSLOduration=1.075463727 podStartE2EDuration="1.075463727s" podCreationTimestamp="2025-11-09 14:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:41:15.075426197 +0000 UTC m=+1.488634572" watchObservedRunningTime="2025-11-09 14:41:15.075463727 +0000 UTC m=+1.488672086"
	Nov 09 14:41:15 newest-cni-192074 kubelet[1297]: I1109 14:41:15.141926    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-192074" podStartSLOduration=1.141904826 podStartE2EDuration="1.141904826s" podCreationTimestamp="2025-11-09 14:41:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:41:15.114113184 +0000 UTC m=+1.527321551" watchObservedRunningTime="2025-11-09 14:41:15.141904826 +0000 UTC m=+1.555113185"
	Nov 09 14:41:18 newest-cni-192074 kubelet[1297]: I1109 14:41:18.574283    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 09 14:41:18 newest-cni-192074 kubelet[1297]: I1109 14:41:18.574834    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.066883    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f389cd7-7dd5-439e-b590-9e4390f0a638-xtables-lock\") pod \"kube-proxy-vjt4x\" (UID: \"4f389cd7-7dd5-439e-b590-9e4390f0a638\") " pod="kube-system/kube-proxy-vjt4x"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.066941    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4f389cd7-7dd5-439e-b590-9e4390f0a638-kube-proxy\") pod \"kube-proxy-vjt4x\" (UID: \"4f389cd7-7dd5-439e-b590-9e4390f0a638\") " pod="kube-system/kube-proxy-vjt4x"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.066961    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f389cd7-7dd5-439e-b590-9e4390f0a638-lib-modules\") pod \"kube-proxy-vjt4x\" (UID: \"4f389cd7-7dd5-439e-b590-9e4390f0a638\") " pod="kube-system/kube-proxy-vjt4x"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.066984    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmvzn\" (UniqueName: \"kubernetes.io/projected/4f389cd7-7dd5-439e-b590-9e4390f0a638-kube-api-access-qmvzn\") pod \"kube-proxy-vjt4x\" (UID: \"4f389cd7-7dd5-439e-b590-9e4390f0a638\") " pod="kube-system/kube-proxy-vjt4x"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.167478    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-cni-cfg\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.167529    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-xtables-lock\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.167562    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-lib-modules\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.167581    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jngxc\" (UniqueName: \"kubernetes.io/projected/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-kube-api-access-jngxc\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: E1109 14:41:19.210163    1297 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: E1109 14:41:19.210218    1297 projected.go:196] Error preparing data for projected volume kube-api-access-qmvzn for pod kube-system/kube-proxy-vjt4x: configmap "kube-root-ca.crt" not found
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: E1109 14:41:19.210330    1297 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f389cd7-7dd5-439e-b590-9e4390f0a638-kube-api-access-qmvzn podName:4f389cd7-7dd5-439e-b590-9e4390f0a638 nodeName:}" failed. No retries permitted until 2025-11-09 14:41:19.710284536 +0000 UTC m=+6.123492895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qmvzn" (UniqueName: "kubernetes.io/projected/4f389cd7-7dd5-439e-b590-9e4390f0a638-kube-api-access-qmvzn") pod "kube-proxy-vjt4x" (UID: "4f389cd7-7dd5-439e-b590-9e4390f0a638") : configmap "kube-root-ca.crt" not found
	Nov 09 14:41:19 newest-cni-192074 kubelet[1297]: I1109 14:41:19.329613    1297 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:41:21 newest-cni-192074 kubelet[1297]: I1109 14:41:21.143351    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gmcpd" podStartSLOduration=3.143329588 podStartE2EDuration="3.143329588s" podCreationTimestamp="2025-11-09 14:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:41:20.122754233 +0000 UTC m=+6.535962592" watchObservedRunningTime="2025-11-09 14:41:21.143329588 +0000 UTC m=+7.556537947"
	Nov 09 14:41:21 newest-cni-192074 kubelet[1297]: I1109 14:41:21.143659    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vjt4x" podStartSLOduration=3.143651872 podStartE2EDuration="3.143651872s" podCreationTimestamp="2025-11-09 14:41:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:41:21.141693495 +0000 UTC m=+7.554901870" watchObservedRunningTime="2025-11-09 14:41:21.143651872 +0000 UTC m=+7.556860231"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-192074 -n newest-cni-192074
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-192074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6brdt storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner: exit status 1 (111.786429ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6brdt" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-192074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-192074 --alsologtostderr -v=1: exit status 80 (1.981224924s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-192074 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:41:43.852840  211223 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:41:43.852968  211223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:41:43.852979  211223 out.go:374] Setting ErrFile to fd 2...
	I1109 14:41:43.852983  211223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:41:43.853223  211223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:41:43.853460  211223 out.go:368] Setting JSON to false
	I1109 14:41:43.853484  211223 mustload.go:66] Loading cluster: newest-cni-192074
	I1109 14:41:43.853902  211223 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:43.854348  211223 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:43.873528  211223 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:43.873877  211223 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:41:43.938386  211223 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:41:43.92565837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:41:43.939439  211223 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-192074 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:41:43.943031  211223 out.go:179] * Pausing node newest-cni-192074 ... 
	I1109 14:41:43.947657  211223 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:43.948044  211223 ssh_runner.go:195] Run: systemctl --version
	I1109 14:41:43.948098  211223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:43.968242  211223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:44.078764  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:41:44.092699  211223 pause.go:52] kubelet running: true
	I1109 14:41:44.092763  211223 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:41:44.347238  211223 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:41:44.347336  211223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:41:44.422593  211223 cri.go:89] found id: "eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19"
	I1109 14:41:44.422624  211223 cri.go:89] found id: "4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0"
	I1109 14:41:44.422630  211223 cri.go:89] found id: "c913d6d7b55a2bf4363aa340114681f62876da6b25f96a1bb3b282eda1b60139"
	I1109 14:41:44.422634  211223 cri.go:89] found id: "586cbf2507c60a0c5f2a7a6dbb1b3df9ad1c324498ff6f1875d3fecc41181903"
	I1109 14:41:44.422637  211223 cri.go:89] found id: "afd353198acd97ab297fdf63f5ed475dde326bf68ef3c2d1001f999ea14a25ac"
	I1109 14:41:44.422641  211223 cri.go:89] found id: "4906171ae291fe25c62fa24b9abc955b2e431c04f03b82b97bc5dac9dabbf8a3"
	I1109 14:41:44.422644  211223 cri.go:89] found id: ""
	I1109 14:41:44.422699  211223 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:41:44.435206  211223 retry.go:31] will retry after 206.711997ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:41:44Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:41:44.642690  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:41:44.655705  211223 pause.go:52] kubelet running: false
	I1109 14:41:44.655791  211223 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:41:44.832392  211223 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:41:44.832475  211223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:41:44.971989  211223 cri.go:89] found id: "eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19"
	I1109 14:41:44.972016  211223 cri.go:89] found id: "4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0"
	I1109 14:41:44.972021  211223 cri.go:89] found id: "c913d6d7b55a2bf4363aa340114681f62876da6b25f96a1bb3b282eda1b60139"
	I1109 14:41:44.972025  211223 cri.go:89] found id: "586cbf2507c60a0c5f2a7a6dbb1b3df9ad1c324498ff6f1875d3fecc41181903"
	I1109 14:41:44.972029  211223 cri.go:89] found id: "afd353198acd97ab297fdf63f5ed475dde326bf68ef3c2d1001f999ea14a25ac"
	I1109 14:41:44.972034  211223 cri.go:89] found id: "4906171ae291fe25c62fa24b9abc955b2e431c04f03b82b97bc5dac9dabbf8a3"
	I1109 14:41:44.972038  211223 cri.go:89] found id: ""
	I1109 14:41:44.972099  211223 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:41:44.985204  211223 retry.go:31] will retry after 312.292445ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:41:44Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:41:45.297725  211223 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:41:45.322020  211223 pause.go:52] kubelet running: false
	I1109 14:41:45.322097  211223 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:41:45.630444  211223 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:41:45.630580  211223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:41:45.751965  211223 cri.go:89] found id: "eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19"
	I1109 14:41:45.752029  211223 cri.go:89] found id: "4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0"
	I1109 14:41:45.752048  211223 cri.go:89] found id: "c913d6d7b55a2bf4363aa340114681f62876da6b25f96a1bb3b282eda1b60139"
	I1109 14:41:45.752064  211223 cri.go:89] found id: "586cbf2507c60a0c5f2a7a6dbb1b3df9ad1c324498ff6f1875d3fecc41181903"
	I1109 14:41:45.752092  211223 cri.go:89] found id: "afd353198acd97ab297fdf63f5ed475dde326bf68ef3c2d1001f999ea14a25ac"
	I1109 14:41:45.752109  211223 cri.go:89] found id: "4906171ae291fe25c62fa24b9abc955b2e431c04f03b82b97bc5dac9dabbf8a3"
	I1109 14:41:45.752126  211223 cri.go:89] found id: ""
	I1109 14:41:45.752223  211223 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:41:45.770651  211223 out.go:203] 
	W1109 14:41:45.773746  211223 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:41:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:41:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:41:45.773817  211223 out.go:285] * 
	* 
	W1109 14:41:45.778775  211223 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:41:45.781938  211223 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-192074 --alsologtostderr -v=1 failed: exit status 80
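The pause failure above reduces to one probe: after kubelet is stopped, "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory", even though crictl still lists six running kube-system containers. A minimal way to replay the same probes by hand against this profile (an illustrative sketch only; it assumes the newest-cni-192074 node is still up and that systemctl, crictl and runc are on the node's PATH, exactly as the pause.go steps above invoke them):

	# replay the checks logged by pause.go, in the same order
	minikube -p newest-cni-192074 ssh -- sudo systemctl is-active kubelet
	minikube -p newest-cni-192074 ssh -- "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	minikube -p newest-cni-192074 ssh -- sudo ls /run/runc        # absent here, matching the error above
	minikube -p newest-cni-192074 ssh -- sudo runc list -f json   # the call that fails with exit status 1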
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-192074
helpers_test.go:243: (dbg) docker inspect newest-cni-192074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223",
	        "Created": "2025-11-09T14:40:38.404452618Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209202,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:41:27.702721755Z",
	            "FinishedAt": "2025-11-09T14:41:26.77697781Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/hostname",
	        "HostsPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/hosts",
	        "LogPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223-json.log",
	        "Name": "/newest-cni-192074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-192074:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-192074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223",
	                "LowerDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-192074",
	                "Source": "/var/lib/docker/volumes/newest-cni-192074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-192074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-192074",
	                "name.minikube.sigs.k8s.io": "newest-cni-192074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97d1733e38dad291eb50091fd479d6dea0e76ab42e1e9ff577321b0655a0881f",
	            "SandboxKey": "/var/run/docker/netns/97d1733e38da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-192074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:51:99:94:a8:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "114ceded31c032452a3ee1a01231f6fa4125cd9140fa08f1853ed64e4b9d3746",
	                    "EndpointID": "17e2ef1ae4bb601f081aa13c5a0327f66953ca5feb348a8d31789e4b2c65268e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-192074",
	                        "6efa62eda748"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
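Of the inspect dump above, the fields the post-mortem actually checks are State.Status and State.Paused: the kic container itself stays running and unpaused, since the pause.go flow above acts on kubelet and the CRI containers inside the node rather than on the Docker container. Those two fields can be read directly with docker's standard -f Go-template flag (illustrative; container name taken from this run):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-192074
	# prints "running paused=false" for this container, matching the State block above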
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192074 -n newest-cni-192074
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192074 -n newest-cni-192074: exit status 2 (402.306051ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-192074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-192074 logs -n 25: (1.162254573s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ stop    │ -p embed-certs-422728 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ image   │ default-k8s-diff-port-103048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p default-k8s-diff-port-103048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-274584                                                                                                                                                                                                               │ disable-driver-mounts-274584 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ stop    │ -p newest-cni-192074 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-192074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ image   │ newest-cni-192074 image list --format=json                                                                                                                                                                                                    │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ pause   │ -p newest-cni-192074 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:41:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:41:27.358960  209070 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:41:27.359137  209070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:41:27.359144  209070 out.go:374] Setting ErrFile to fd 2...
	I1109 14:41:27.359149  209070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:41:27.359462  209070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:41:27.359826  209070 out.go:368] Setting JSON to false
	I1109 14:41:27.360760  209070 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5038,"bootTime":1762694250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:41:27.360830  209070 start.go:143] virtualization:  
	I1109 14:41:27.363953  209070 out.go:179] * [newest-cni-192074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:41:27.367810  209070 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:41:27.368031  209070 notify.go:221] Checking for updates...
	I1109 14:41:27.374686  209070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:41:27.377726  209070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:27.380710  209070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:41:27.383622  209070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:41:27.386550  209070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:41:27.389890  209070 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:27.390560  209070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:41:27.419545  209070 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:41:27.419657  209070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:41:27.526855  209070 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:41:27.517117019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:41:27.526971  209070 docker.go:319] overlay module found
	I1109 14:41:27.530158  209070 out.go:179] * Using the docker driver based on existing profile
	I1109 14:41:27.532980  209070 start.go:309] selected driver: docker
	I1109 14:41:27.533002  209070 start.go:930] validating driver "docker" against &{Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:41:27.533106  209070 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:41:27.533841  209070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:41:27.615471  209070 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:41:27.606224698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:41:27.615794  209070 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:41:27.615817  209070 cni.go:84] Creating CNI manager for ""
	I1109 14:41:27.615942  209070 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:41:27.615983  209070 start.go:353] cluster config:
	{Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:41:27.619162  209070 out.go:179] * Starting "newest-cni-192074" primary control-plane node in "newest-cni-192074" cluster
	I1109 14:41:27.622027  209070 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:41:27.625544  209070 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:41:27.628497  209070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:41:27.628545  209070 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:41:27.628560  209070 cache.go:65] Caching tarball of preloaded images
	I1109 14:41:27.628570  209070 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:41:27.628653  209070 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:41:27.628664  209070 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:41:27.628822  209070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/config.json ...
	I1109 14:41:27.647536  209070 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:41:27.647559  209070 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:41:27.647577  209070 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:41:27.647599  209070 start.go:360] acquireMachinesLock for newest-cni-192074: {Name:mk50468e4f833af9c54b7aff282eee0b8ef871dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:41:27.647656  209070 start.go:364] duration metric: took 35.175µs to acquireMachinesLock for "newest-cni-192074"
	I1109 14:41:27.647680  209070 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:41:27.647688  209070 fix.go:54] fixHost starting: 
	I1109 14:41:27.648013  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:27.664466  209070 fix.go:112] recreateIfNeeded on newest-cni-192074: state=Stopped err=<nil>
	W1109 14:41:27.664498  209070 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:41:25.694169  203153 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:41:25.699671  203153 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:41:25.699707  203153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:41:25.738096  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:41:26.257453  203153 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:41:26.257576  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:26.257640  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-545474 minikube.k8s.io/updated_at=2025_11_09T14_41_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=no-preload-545474 minikube.k8s.io/primary=true
	I1109 14:41:26.421439  203153 ops.go:34] apiserver oom_adj: -16
	I1109 14:41:26.421548  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:26.921787  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:27.421776  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:27.923362  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:28.421631  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:28.922298  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:29.422212  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:29.922567  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:30.103845  203153 kubeadm.go:1114] duration metric: took 3.846309325s to wait for elevateKubeSystemPrivileges
	I1109 14:41:30.103889  203153 kubeadm.go:403] duration metric: took 26.808914446s to StartCluster
	I1109 14:41:30.103908  203153 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:30.103976  203153 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:30.104731  203153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:30.104995  203153 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:41:30.105098  203153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:41:30.105369  203153 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:30.105412  203153 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:41:30.105477  203153 addons.go:70] Setting storage-provisioner=true in profile "no-preload-545474"
	I1109 14:41:30.105498  203153 addons.go:239] Setting addon storage-provisioner=true in "no-preload-545474"
	I1109 14:41:30.105521  203153 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:41:30.106047  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:41:30.106393  203153 addons.go:70] Setting default-storageclass=true in profile "no-preload-545474"
	I1109 14:41:30.106421  203153 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545474"
	I1109 14:41:30.106764  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:41:30.108397  203153 out.go:179] * Verifying Kubernetes components...
	I1109 14:41:30.111586  203153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:30.151324  203153 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:41:30.153764  203153 addons.go:239] Setting addon default-storageclass=true in "no-preload-545474"
	I1109 14:41:30.153807  203153 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:41:30.154248  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:41:30.155422  203153 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:30.155443  203153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:41:30.155504  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:41:30.185633  203153 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:30.185657  203153 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:41:30.185924  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:41:30.206073  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:41:30.231616  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:41:30.413277  203153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:41:30.413380  203153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:41:30.430148  203153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:30.467464  203153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:30.926323  203153 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1109 14:41:30.928386  203153 node_ready.go:35] waiting up to 6m0s for node "no-preload-545474" to be "Ready" ...
	I1109 14:41:31.430889  203153 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-545474" context rescaled to 1 replicas
	I1109 14:41:31.453384  203153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.023150632s)
	I1109 14:41:31.491769  203153 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:41:27.667792  209070 out.go:252] * Restarting existing docker container for "newest-cni-192074" ...
	I1109 14:41:27.667932  209070 cli_runner.go:164] Run: docker start newest-cni-192074
	I1109 14:41:27.942585  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:27.972366  209070 kic.go:430] container "newest-cni-192074" state is running.
	I1109 14:41:27.972742  209070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:41:27.999932  209070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/config.json ...
	I1109 14:41:28.000169  209070 machine.go:94] provisionDockerMachine start ...
	I1109 14:41:28.000246  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:28.031134  209070 main.go:143] libmachine: Using SSH client type: native
	I1109 14:41:28.032272  209070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1109 14:41:28.032292  209070 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:41:28.033033  209070 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:41:31.227723  209070 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-192074
	
	I1109 14:41:31.227798  209070 ubuntu.go:182] provisioning hostname "newest-cni-192074"
	I1109 14:41:31.227933  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:31.255636  209070 main.go:143] libmachine: Using SSH client type: native
	I1109 14:41:31.256022  209070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1109 14:41:31.256036  209070 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-192074 && echo "newest-cni-192074" | sudo tee /etc/hostname
	I1109 14:41:31.460102  209070 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-192074
	
	I1109 14:41:31.460360  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:31.485710  209070 main.go:143] libmachine: Using SSH client type: native
	I1109 14:41:31.486023  209070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1109 14:41:31.486040  209070 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-192074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-192074/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-192074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:41:31.659860  209070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:41:31.659960  209070 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:41:31.659989  209070 ubuntu.go:190] setting up certificates
	I1109 14:41:31.660008  209070 provision.go:84] configureAuth start
	I1109 14:41:31.660074  209070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:41:31.681613  209070 provision.go:143] copyHostCerts
	I1109 14:41:31.681681  209070 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:41:31.681695  209070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:41:31.681777  209070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:41:31.681868  209070 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:41:31.681882  209070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:41:31.681910  209070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:41:31.681971  209070 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:41:31.681980  209070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:41:31.682004  209070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:41:31.682054  209070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.newest-cni-192074 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-192074]
	I1109 14:41:32.042985  209070 provision.go:177] copyRemoteCerts
	I1109 14:41:32.043057  209070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:41:32.043111  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:32.064051  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:32.187506  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:41:32.235309  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:41:32.270666  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:41:32.302928  209070 provision.go:87] duration metric: took 642.878974ms to configureAuth
	I1109 14:41:32.302957  209070 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:41:32.303244  209070 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:32.303394  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:32.329260  209070 main.go:143] libmachine: Using SSH client type: native
	I1109 14:41:32.329816  209070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1109 14:41:32.329851  209070 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:41:32.713720  209070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:41:32.713747  209070 machine.go:97] duration metric: took 4.713560928s to provisionDockerMachine
	I1109 14:41:32.713759  209070 start.go:293] postStartSetup for "newest-cni-192074" (driver="docker")
	I1109 14:41:32.713769  209070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:41:32.713826  209070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:41:32.713886  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:32.738102  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:32.853040  209070 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:41:32.857124  209070 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:41:32.857150  209070 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:41:32.857160  209070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:41:32.857208  209070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:41:32.857285  209070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:41:32.857387  209070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:41:32.871747  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:41:32.894321  209070 start.go:296] duration metric: took 180.546742ms for postStartSetup
	I1109 14:41:32.894405  209070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:41:32.894467  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:32.914585  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:33.028613  209070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:41:33.036164  209070 fix.go:56] duration metric: took 5.388466358s for fixHost
	I1109 14:41:33.036187  209070 start.go:83] releasing machines lock for "newest-cni-192074", held for 5.388518379s
	I1109 14:41:33.036270  209070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:41:33.061542  209070 ssh_runner.go:195] Run: cat /version.json
	I1109 14:41:33.061595  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:33.061842  209070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:41:33.061888  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:33.102818  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:33.112571  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:33.333329  209070 ssh_runner.go:195] Run: systemctl --version
	I1109 14:41:33.342476  209070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:41:33.416038  209070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:41:33.424834  209070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:41:33.424914  209070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:41:33.441110  209070 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:41:33.441184  209070 start.go:496] detecting cgroup driver to use...
	I1109 14:41:33.441232  209070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:41:33.441338  209070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:41:33.459701  209070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:41:33.474641  209070 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:41:33.474764  209070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:41:33.494307  209070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:41:33.509018  209070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:41:33.689206  209070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:41:33.870199  209070 docker.go:234] disabling docker service ...
	I1109 14:41:33.870267  209070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:41:33.887710  209070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:41:33.903376  209070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:41:34.063108  209070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:41:34.226857  209070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:41:34.241929  209070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:41:34.261097  209070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:41:34.261168  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.282628  209070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:41:34.282695  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.294305  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.304450  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.314388  209070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:41:34.325388  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.335482  209070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.346415  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.359123  209070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:41:34.367287  209070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:41:34.379114  209070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:34.536439  209070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:41:34.851123  209070 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:41:34.851271  209070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:41:34.855445  209070 start.go:564] Will wait 60s for crictl version
	I1109 14:41:34.855578  209070 ssh_runner.go:195] Run: which crictl
	I1109 14:41:34.859295  209070 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:41:34.884865  209070 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:41:34.885026  209070 ssh_runner.go:195] Run: crio --version
	I1109 14:41:34.925798  209070 ssh_runner.go:195] Run: crio --version
	I1109 14:41:34.960918  209070 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:41:31.494680  203153 addons.go:515] duration metric: took 1.389247743s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1109 14:41:32.932208  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	I1109 14:41:34.962134  209070 cli_runner.go:164] Run: docker network inspect newest-cni-192074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:41:34.981420  209070 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:41:34.986110  209070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:41:34.997741  209070 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1109 14:41:34.998894  209070 kubeadm.go:884] updating cluster {Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:41:34.999046  209070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:41:34.999120  209070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:41:35.044081  209070 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:41:35.044108  209070 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:41:35.044164  209070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:41:35.070831  209070 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:41:35.070856  209070 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:41:35.070865  209070 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:41:35.070977  209070 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-192074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:41:35.071068  209070 ssh_runner.go:195] Run: crio config
	I1109 14:41:35.149614  209070 cni.go:84] Creating CNI manager for ""
	I1109 14:41:35.149639  209070 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:41:35.149658  209070 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1109 14:41:35.149685  209070 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-192074 NodeName:newest-cni-192074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:41:35.149820  209070 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-192074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:41:35.149904  209070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:41:35.158290  209070 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:41:35.158360  209070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:41:35.167156  209070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:41:35.182739  209070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:41:35.196597  209070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1109 14:41:35.209739  209070 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:41:35.213320  209070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:41:35.222967  209070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:35.334250  209070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:41:35.356427  209070 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074 for IP: 192.168.76.2
	I1109 14:41:35.356504  209070 certs.go:195] generating shared ca certs ...
	I1109 14:41:35.356534  209070 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:35.356703  209070 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:41:35.356792  209070 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:41:35.356816  209070 certs.go:257] generating profile certs ...
	I1109 14:41:35.356923  209070 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/client.key
	I1109 14:41:35.357027  209070 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key.19ad1ce3
	I1109 14:41:35.357100  209070 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.key
	I1109 14:41:35.357243  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:41:35.357309  209070 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:41:35.357336  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:41:35.357395  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:41:35.357437  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:41:35.357489  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:41:35.357557  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:41:35.358193  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:41:35.378364  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:41:35.397537  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:41:35.415588  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:41:35.438749  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:41:35.459783  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:41:35.478368  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:41:35.502925  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:41:35.559268  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:41:35.580606  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:41:35.606994  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:41:35.628213  209070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:41:35.643589  209070 ssh_runner.go:195] Run: openssl version
	I1109 14:41:35.650069  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:41:35.659669  209070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:35.663434  209070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:35.663501  209070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:35.708872  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:41:35.717119  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:41:35.725430  209070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:41:35.729153  209070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:41:35.729213  209070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:41:35.770781  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:41:35.779151  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:41:35.787678  209070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:41:35.792720  209070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:41:35.792820  209070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:41:35.837076  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:41:35.844980  209070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:41:35.848849  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:41:35.890950  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:41:35.933888  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:41:35.976588  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:41:36.020221  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:41:36.070028  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:41:36.114718  209070 kubeadm.go:401] StartCluster: {Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:41:36.114810  209070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:41:36.114910  209070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:41:36.189862  209070 cri.go:89] found id: ""
	I1109 14:41:36.189970  209070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:41:36.201414  209070 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:41:36.201438  209070 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:41:36.201531  209070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:41:36.220210  209070 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:41:36.220797  209070 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-192074" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:36.224636  209070 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-192074" cluster setting kubeconfig missing "newest-cni-192074" context setting]
	I1109 14:41:36.225422  209070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:36.230872  209070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:41:36.258411  209070 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:41:36.258447  209070 kubeadm.go:602] duration metric: took 57.002505ms to restartPrimaryControlPlane
	I1109 14:41:36.258484  209070 kubeadm.go:403] duration metric: took 143.773782ms to StartCluster
	I1109 14:41:36.258507  209070 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:36.258590  209070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:36.259668  209070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:36.260296  209070 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:36.260576  209070 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:41:36.260662  209070 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-192074"
	I1109 14:41:36.260680  209070 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-192074"
	W1109 14:41:36.260687  209070 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:41:36.260710  209070 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:36.261163  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:36.261342  209070 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:41:36.261648  209070 addons.go:70] Setting dashboard=true in profile "newest-cni-192074"
	I1109 14:41:36.261667  209070 addons.go:239] Setting addon dashboard=true in "newest-cni-192074"
	W1109 14:41:36.261674  209070 addons.go:248] addon dashboard should already be in state true
	I1109 14:41:36.261730  209070 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:36.261761  209070 addons.go:70] Setting default-storageclass=true in profile "newest-cni-192074"
	I1109 14:41:36.261779  209070 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-192074"
	I1109 14:41:36.262054  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:36.262245  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:36.271272  209070 out.go:179] * Verifying Kubernetes components...
	I1109 14:41:36.272678  209070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:36.325452  209070 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:41:36.326720  209070 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:36.326740  209070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:41:36.326813  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:36.333095  209070 addons.go:239] Setting addon default-storageclass=true in "newest-cni-192074"
	W1109 14:41:36.333120  209070 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:41:36.333146  209070 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:36.333548  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:36.336665  209070 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:41:36.338699  209070 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:41:36.339927  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:41:36.339950  209070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:41:36.340020  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:36.392863  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:36.395129  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:36.397768  209070 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:36.397790  209070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:41:36.397855  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:36.436099  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:36.589364  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:41:36.589392  209070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:41:36.658275  209070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:41:36.682843  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:41:36.682870  209070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:41:36.718987  209070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:36.725520  209070 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:41:36.725591  209070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:41:36.728294  209070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:36.778142  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:41:36.778168  209070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:41:36.846208  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:41:36.846234  209070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:41:36.877494  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:41:36.877518  209070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:41:36.942318  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:41:36.942349  209070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:41:37.018326  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:41:37.018356  209070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:41:37.061016  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:41:37.061040  209070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:41:37.084261  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:41:37.084286  209070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:41:37.107101  209070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1109 14:41:35.432111  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	W1109 14:41:37.432424  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	W1109 14:41:39.932104  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	I1109 14:41:41.541839  209070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.822814805s)
	I1109 14:41:41.542032  209070 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.816424542s)
	I1109 14:41:41.542093  209070 api_server.go:72] duration metric: took 5.280722247s to wait for apiserver process to appear ...
	I1109 14:41:41.542106  209070 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:41:41.542123  209070 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:41.576310  209070 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:41:41.576349  209070 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:41:42.042871  209070 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:42.065919  209070 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:41:42.065959  209070 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:41:42.542396  209070 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:42.552229  209070 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:41:42.552309  209070 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:41:42.732935  209070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.004608389s)
	I1109 14:41:42.733115  209070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.625982127s)
	I1109 14:41:42.736332  209070 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-192074 addons enable metrics-server
	
	I1109 14:41:42.739365  209070 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1109 14:41:42.742180  209070 addons.go:515] duration metric: took 6.481595896s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1109 14:41:43.042572  209070 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:43.051050  209070 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:41:43.052383  209070 api_server.go:141] control plane version: v1.34.1
	I1109 14:41:43.052418  209070 api_server.go:131] duration metric: took 1.510300299s to wait for apiserver health ...
	I1109 14:41:43.052445  209070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:41:43.055718  209070 system_pods.go:59] 8 kube-system pods found
	I1109 14:41:43.055759  209070 system_pods.go:61] "coredns-66bc5c9577-6brdt" [50d6b82b-8e51-463c-82a3-a4a103105b6a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:41:43.055769  209070 system_pods.go:61] "etcd-newest-cni-192074" [c5ddb834-a41a-4e78-8b40-e27ff57c60d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:41:43.055775  209070 system_pods.go:61] "kindnet-gmcpd" [00d2ffcc-cb88-4632-8efd-e59fe208d3c8] Running
	I1109 14:41:43.055783  209070 system_pods.go:61] "kube-apiserver-newest-cni-192074" [b2ba9393-513f-4735-b9fa-713bf9ac8fed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:41:43.055791  209070 system_pods.go:61] "kube-controller-manager-newest-cni-192074" [5ecab913-3ba7-42cc-a66f-7a8e512c6c71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:41:43.055801  209070 system_pods.go:61] "kube-proxy-vjt4x" [4f389cd7-7dd5-439e-b590-9e4390f0a638] Running
	I1109 14:41:43.055809  209070 system_pods.go:61] "kube-scheduler-newest-cni-192074" [265941ce-4026-4e49-891b-10d612942e7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:41:43.055819  209070 system_pods.go:61] "storage-provisioner" [9f3003f1-507f-461b-bff4-e19dafefcd23] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:41:43.055827  209070 system_pods.go:74] duration metric: took 3.369435ms to wait for pod list to return data ...
	I1109 14:41:43.055840  209070 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:41:43.058730  209070 default_sa.go:45] found service account: "default"
	I1109 14:41:43.058757  209070 default_sa.go:55] duration metric: took 2.910033ms for default service account to be created ...
	I1109 14:41:43.058771  209070 kubeadm.go:587] duration metric: took 6.797398967s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:41:43.058801  209070 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:41:43.062166  209070 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:41:43.062239  209070 node_conditions.go:123] node cpu capacity is 2
	I1109 14:41:43.062269  209070 node_conditions.go:105] duration metric: took 3.449542ms to run NodePressure ...
	I1109 14:41:43.062284  209070 start.go:242] waiting for startup goroutines ...
	I1109 14:41:43.062291  209070 start.go:247] waiting for cluster config update ...
	I1109 14:41:43.062304  209070 start.go:256] writing updated cluster config ...
	I1109 14:41:43.062597  209070 ssh_runner.go:195] Run: rm -f paused
	I1109 14:41:43.124645  209070 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:41:43.128707  209070 out.go:179] * Done! kubectl is now configured to use "newest-cni-192074" cluster and "default" namespace by default
	W1109 14:41:41.933022  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	W1109 14:41:43.934352  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.79957338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.806419377Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6db8eb90-cd75-4673-9c2b-c0783d2d364e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.816869888Z" level=info msg="Ran pod sandbox a5c8dbb14dc04a668f880818d9df0b34b401a810e78f1029111ab427669769eb with infra container: kube-system/kindnet-gmcpd/POD" id=6db8eb90-cd75-4673-9c2b-c0783d2d364e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.824324219Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-vjt4x/POD" id=4df067d4-fd06-4ab6-8bd0-b14042619d81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.824395062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.826471127Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=17ab18dc-348f-4847-af2d-14b262fd2339 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.829051057Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2c82af5c-9bf8-4ddc-8e80-0eeb171f52b5 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.829598697Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4df067d4-fd06-4ab6-8bd0-b14042619d81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.831067216Z" level=info msg="Creating container: kube-system/kindnet-gmcpd/kindnet-cni" id=48083390-d500-4134-bde2-123a604854ba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.831160361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.856776841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.859608515Z" level=info msg="Ran pod sandbox 2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d with infra container: kube-system/kube-proxy-vjt4x/POD" id=4df067d4-fd06-4ab6-8bd0-b14042619d81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.859845243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.862168252Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=62c9999e-3f28-453a-a17f-ad9b95b4aaee name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.866497408Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6ed0db08-79d9-4589-bf3c-83387d98b5be name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.868655492Z" level=info msg="Creating container: kube-system/kube-proxy-vjt4x/kube-proxy" id=43263849-d3b2-4acc-8185-79337cf27b84 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.868949641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.878038489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.880276164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.903811844Z" level=info msg="Created container 4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0: kube-system/kindnet-gmcpd/kindnet-cni" id=48083390-d500-4134-bde2-123a604854ba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.906956038Z" level=info msg="Starting container: 4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0" id=06ea8948-bf65-41e2-8032-b857a2b83dcf name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.909384122Z" level=info msg="Started container" PID=1055 containerID=4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0 description=kube-system/kindnet-gmcpd/kindnet-cni id=06ea8948-bf65-41e2-8032-b857a2b83dcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5c8dbb14dc04a668f880818d9df0b34b401a810e78f1029111ab427669769eb
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.948103478Z" level=info msg="Created container eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19: kube-system/kube-proxy-vjt4x/kube-proxy" id=43263849-d3b2-4acc-8185-79337cf27b84 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.949026334Z" level=info msg="Starting container: eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19" id=f2971588-4e01-4a23-9dc5-ddde7b49d5b8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.951595991Z" level=info msg="Started container" PID=1061 containerID=eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19 description=kube-system/kube-proxy-vjt4x/kube-proxy id=f2971588-4e01-4a23-9dc5-ddde7b49d5b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	eec42ffc8671c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   2d607777fd17f       kube-proxy-vjt4x                            kube-system
	4bbbd857426df       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   a5c8dbb14dc04       kindnet-gmcpd                               kube-system
	c913d6d7b55a2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   44c075c86d06e       kube-scheduler-newest-cni-192074            kube-system
	586cbf2507c60       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   059d5ebfe832e       kube-apiserver-newest-cni-192074            kube-system
	afd353198acd9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   514c43d857b7e       etcd-newest-cni-192074                      kube-system
	4906171ae291f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   843c399279a02       kube-controller-manager-newest-cni-192074   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-192074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-192074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=newest-cni-192074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_41_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:41:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-192074
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:41:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:41:41 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:41:41 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:41:41 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 09 Nov 2025 14:41:41 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-192074
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c64f5fab-6069-4738-9e11-1ea44009e643
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-192074                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-gmcpd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-192074             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-192074    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-vjt4x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-192074             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node newest-cni-192074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node newest-cni-192074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x8 over 47s)  kubelet          Node newest-cni-192074 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-192074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-192074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-192074 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-192074 event: Registered Node newest-cni-192074 in Controller
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-192074 event: Registered Node newest-cni-192074 in Controller
	
	
	==> dmesg <==
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:40] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:41] overlayfs: idmapped layers are currently not supported
	[ +35.139553] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [afd353198acd97ab297fdf63f5ed475dde326bf68ef3c2d1001f999ea14a25ac] <==
	{"level":"warn","ts":"2025-11-09T14:41:39.635973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.652082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.708026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.718734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.721134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.754074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.795568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.879296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.900884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.924284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.955170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.994743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.024289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.068164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.080361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.113804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.155346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.166223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.185334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.206983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.227797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.259794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.296153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.307567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.397207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47990","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:41:47 up  1:24,  0 user,  load average: 6.86, 4.38, 3.20
	Linux newest-cni-192074 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0] <==
	I1109 14:41:42.047239       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:41:42.047767       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:41:42.047936       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:41:42.048007       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:41:42.048043       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:41:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:41:42.328382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:41:42.328409       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:41:42.328419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:41:42.333463       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [586cbf2507c60a0c5f2a7a6dbb1b3df9ad1c324498ff6f1875d3fecc41181903] <==
	I1109 14:41:41.366784       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:41:41.366791       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:41:41.366799       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:41:41.375068       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:41:41.375329       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:41:41.375353       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1109 14:41:41.375423       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 14:41:41.375458       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:41:41.396566       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:41:41.396591       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:41:41.397311       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:41:41.397378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:41:41.410057       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:41:41.425011       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:41:41.622775       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:41:41.997164       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:41:42.245943       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:41:42.335528       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:41:42.386276       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:41:42.402970       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:41:42.609328       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.213.87"}
	I1109 14:41:42.627022       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.184.126"}
	I1109 14:41:45.308967       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:41:45.401959       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:41:45.445078       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4906171ae291fe25c62fa24b9abc955b2e431c04f03b82b97bc5dac9dabbf8a3] <==
	I1109 14:41:44.979530       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:41:44.984277       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:41:44.986815       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:41:44.990261       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:41:44.990391       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:41:44.990542       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:41:44.990618       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:41:44.991086       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:41:44.991564       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-192074"
	I1109 14:41:44.991665       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:41:44.995319       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:41:44.995623       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1109 14:41:44.995805       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:41:44.995901       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:41:44.995914       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:41:44.998542       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:41:44.998788       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:41:44.999201       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:41:45.000723       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:41:45.000808       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:41:45.000817       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:41:45.000825       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:41:45.006500       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:41:45.028791       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:41:45.029548       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19] <==
	I1109 14:41:42.077335       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:41:42.312270       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:41:42.412843       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:41:42.412882       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:41:42.412955       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:41:42.629723       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:41:42.629780       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:41:42.675975       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:41:42.676289       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:41:42.676305       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:41:42.681104       1 config.go:200] "Starting service config controller"
	I1109 14:41:42.683983       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:41:42.684108       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:41:42.684153       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:41:42.684192       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:41:42.684234       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:41:42.685140       1 config.go:309] "Starting node config controller"
	I1109 14:41:42.688178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:41:42.688293       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:41:42.785197       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:41:42.785203       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:41:42.785236       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c913d6d7b55a2bf4363aa340114681f62876da6b25f96a1bb3b282eda1b60139] <==
	I1109 14:41:39.226289       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:41:41.232140       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:41:41.232175       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:41:41.232185       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:41:41.232305       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:41:41.361327       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:41:41.361369       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:41:41.375926       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:41:41.376073       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:41:41.384608       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:41:41.384700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:41:41.480974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:41:40 newest-cni-192074 kubelet[733]: E1109 14:41:40.136130     733 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-192074\" not found" node="newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.191074     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: E1109 14:41:41.400562     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-192074\" already exists" pod="kube-system/kube-controller-manager-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.400600     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.483059     733 apiserver.go:52] "Watching apiserver"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: E1109 14:41:41.498520     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-192074\" already exists" pod="kube-system/kube-scheduler-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.498732     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.505220     733 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.505469     733 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.505570     733 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.506519     733 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: E1109 14:41:41.587954     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-192074\" already exists" pod="kube-system/etcd-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.588008     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.594129     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599131     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f389cd7-7dd5-439e-b590-9e4390f0a638-xtables-lock\") pod \"kube-proxy-vjt4x\" (UID: \"4f389cd7-7dd5-439e-b590-9e4390f0a638\") " pod="kube-system/kube-proxy-vjt4x"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599317     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f389cd7-7dd5-439e-b590-9e4390f0a638-lib-modules\") pod \"kube-proxy-vjt4x\" (UID: \"4f389cd7-7dd5-439e-b590-9e4390f0a638\") " pod="kube-system/kube-proxy-vjt4x"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599412     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-xtables-lock\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599536     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-lib-modules\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599685     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-cni-cfg\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: E1109 14:41:41.623512     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-192074\" already exists" pod="kube-system/kube-apiserver-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.640063     733 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: W1109 14:41:41.858167     733 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/crio-2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d WatchSource:0}: Error finding container 2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d: Status 404 returned error can't find the container with id 2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d
	Nov 09 14:41:44 newest-cni-192074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:41:44 newest-cni-192074 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:41:44 newest-cni-192074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-192074 -n newest-cni-192074
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-192074 -n newest-cni-192074: exit status 2 (370.829604ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-192074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6brdt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jm9lh kubernetes-dashboard-855c9754f9-hdpw8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jm9lh kubernetes-dashboard-855c9754f9-hdpw8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jm9lh kubernetes-dashboard-855c9754f9-hdpw8: exit status 1 (91.808976ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6brdt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-jm9lh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-hdpw8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jm9lh kubernetes-dashboard-855c9754f9-hdpw8: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-192074
helpers_test.go:243: (dbg) docker inspect newest-cni-192074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223",
	        "Created": "2025-11-09T14:40:38.404452618Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 209202,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:41:27.702721755Z",
	            "FinishedAt": "2025-11-09T14:41:26.77697781Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/hostname",
	        "HostsPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/hosts",
	        "LogPath": "/var/lib/docker/containers/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223-json.log",
	        "Name": "/newest-cni-192074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-192074:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-192074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223",
	                "LowerDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eabc72676e437d7c1a314a8444f33816e7c030e826a97c06275df01c27b5a1a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-192074",
	                "Source": "/var/lib/docker/volumes/newest-cni-192074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-192074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-192074",
	                "name.minikube.sigs.k8s.io": "newest-cni-192074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97d1733e38dad291eb50091fd479d6dea0e76ab42e1e9ff577321b0655a0881f",
	            "SandboxKey": "/var/run/docker/netns/97d1733e38da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-192074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:51:99:94:a8:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "114ceded31c032452a3ee1a01231f6fa4125cd9140fa08f1853ed64e4b9d3746",
	                    "EndpointID": "17e2ef1ae4bb601f081aa13c5a0327f66953ca5feb348a8d31789e4b2c65268e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-192074",
	                        "6efa62eda748"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192074 -n newest-cni-192074
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192074 -n newest-cni-192074: exit status 2 (433.032707ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-192074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-192074 logs -n 25: (1.220393996s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-103048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:38 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-422728 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-103048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ stop    │ -p embed-certs-422728 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ image   │ default-k8s-diff-port-103048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p default-k8s-diff-port-103048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-274584                                                                                                                                                                                                               │ disable-driver-mounts-274584 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ stop    │ -p newest-cni-192074 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-192074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ image   │ newest-cni-192074 image list --format=json                                                                                                                                                                                                    │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ pause   │ -p newest-cni-192074 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:41:27
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:41:27.358960  209070 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:41:27.359137  209070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:41:27.359144  209070 out.go:374] Setting ErrFile to fd 2...
	I1109 14:41:27.359149  209070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:41:27.359462  209070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:41:27.359826  209070 out.go:368] Setting JSON to false
	I1109 14:41:27.360760  209070 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5038,"bootTime":1762694250,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:41:27.360830  209070 start.go:143] virtualization:  
	I1109 14:41:27.363953  209070 out.go:179] * [newest-cni-192074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:41:27.367810  209070 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:41:27.368031  209070 notify.go:221] Checking for updates...
	I1109 14:41:27.374686  209070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:41:27.377726  209070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:27.380710  209070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:41:27.383622  209070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:41:27.386550  209070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:41:27.389890  209070 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:27.390560  209070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:41:27.419545  209070 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:41:27.419657  209070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:41:27.526855  209070 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:41:27.517117019 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:41:27.526971  209070 docker.go:319] overlay module found
	I1109 14:41:27.530158  209070 out.go:179] * Using the docker driver based on existing profile
	I1109 14:41:27.532980  209070 start.go:309] selected driver: docker
	I1109 14:41:27.533002  209070 start.go:930] validating driver "docker" against &{Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:41:27.533106  209070 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:41:27.533841  209070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:41:27.615471  209070 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:41:27.606224698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:41:27.615794  209070 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:41:27.615817  209070 cni.go:84] Creating CNI manager for ""
	I1109 14:41:27.615942  209070 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:41:27.615983  209070 start.go:353] cluster config:
	{Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:41:27.619162  209070 out.go:179] * Starting "newest-cni-192074" primary control-plane node in "newest-cni-192074" cluster
	I1109 14:41:27.622027  209070 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:41:27.625544  209070 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:41:27.628497  209070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:41:27.628545  209070 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:41:27.628560  209070 cache.go:65] Caching tarball of preloaded images
	I1109 14:41:27.628570  209070 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:41:27.628653  209070 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:41:27.628664  209070 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:41:27.628822  209070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/config.json ...
	I1109 14:41:27.647536  209070 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:41:27.647559  209070 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:41:27.647577  209070 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:41:27.647599  209070 start.go:360] acquireMachinesLock for newest-cni-192074: {Name:mk50468e4f833af9c54b7aff282eee0b8ef871dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:41:27.647656  209070 start.go:364] duration metric: took 35.175µs to acquireMachinesLock for "newest-cni-192074"
	I1109 14:41:27.647680  209070 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:41:27.647688  209070 fix.go:54] fixHost starting: 
	I1109 14:41:27.648013  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:27.664466  209070 fix.go:112] recreateIfNeeded on newest-cni-192074: state=Stopped err=<nil>
	W1109 14:41:27.664498  209070 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:41:25.694169  203153 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:41:25.699671  203153 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:41:25.699707  203153 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:41:25.738096  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:41:26.257453  203153 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:41:26.257576  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:26.257640  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-545474 minikube.k8s.io/updated_at=2025_11_09T14_41_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=no-preload-545474 minikube.k8s.io/primary=true
	I1109 14:41:26.421439  203153 ops.go:34] apiserver oom_adj: -16
	I1109 14:41:26.421548  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:26.921787  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:27.421776  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:27.923362  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:28.421631  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:28.922298  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:29.422212  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:29.922567  203153 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:41:30.103845  203153 kubeadm.go:1114] duration metric: took 3.846309325s to wait for elevateKubeSystemPrivileges
	I1109 14:41:30.103889  203153 kubeadm.go:403] duration metric: took 26.808914446s to StartCluster
	I1109 14:41:30.103908  203153 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:30.103976  203153 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:30.104731  203153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:30.104995  203153 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:41:30.105098  203153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:41:30.105369  203153 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:30.105412  203153 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:41:30.105477  203153 addons.go:70] Setting storage-provisioner=true in profile "no-preload-545474"
	I1109 14:41:30.105498  203153 addons.go:239] Setting addon storage-provisioner=true in "no-preload-545474"
	I1109 14:41:30.105521  203153 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:41:30.106047  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:41:30.106393  203153 addons.go:70] Setting default-storageclass=true in profile "no-preload-545474"
	I1109 14:41:30.106421  203153 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545474"
	I1109 14:41:30.106764  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:41:30.108397  203153 out.go:179] * Verifying Kubernetes components...
	I1109 14:41:30.111586  203153 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:30.151324  203153 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:41:30.153764  203153 addons.go:239] Setting addon default-storageclass=true in "no-preload-545474"
	I1109 14:41:30.153807  203153 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:41:30.154248  203153 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:41:30.155422  203153 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:30.155443  203153 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:41:30.155504  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:41:30.185633  203153 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:30.185657  203153 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:41:30.185924  203153 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:41:30.206073  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:41:30.231616  203153 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33075 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:41:30.413277  203153 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:41:30.413380  203153 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:41:30.430148  203153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:30.467464  203153 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:30.926323  203153 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1109 14:41:30.928386  203153 node_ready.go:35] waiting up to 6m0s for node "no-preload-545474" to be "Ready" ...
	I1109 14:41:31.430889  203153 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-545474" context rescaled to 1 replicas
	I1109 14:41:31.453384  203153 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.023150632s)
	I1109 14:41:31.491769  203153 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1109 14:41:27.667792  209070 out.go:252] * Restarting existing docker container for "newest-cni-192074" ...
	I1109 14:41:27.667932  209070 cli_runner.go:164] Run: docker start newest-cni-192074
	I1109 14:41:27.942585  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:27.972366  209070 kic.go:430] container "newest-cni-192074" state is running.
	I1109 14:41:27.972742  209070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:41:27.999932  209070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/config.json ...
	I1109 14:41:28.000169  209070 machine.go:94] provisionDockerMachine start ...
	I1109 14:41:28.000246  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:28.031134  209070 main.go:143] libmachine: Using SSH client type: native
	I1109 14:41:28.032272  209070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1109 14:41:28.032292  209070 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:41:28.033033  209070 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1109 14:41:31.227723  209070 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-192074
	
	I1109 14:41:31.227798  209070 ubuntu.go:182] provisioning hostname "newest-cni-192074"
	I1109 14:41:31.227933  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:31.255636  209070 main.go:143] libmachine: Using SSH client type: native
	I1109 14:41:31.256022  209070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1109 14:41:31.256036  209070 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-192074 && echo "newest-cni-192074" | sudo tee /etc/hostname
	I1109 14:41:31.460102  209070 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-192074
	
	I1109 14:41:31.460360  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:31.485710  209070 main.go:143] libmachine: Using SSH client type: native
	I1109 14:41:31.486023  209070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1109 14:41:31.486040  209070 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-192074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-192074/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-192074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:41:31.659860  209070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:41:31.659960  209070 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:41:31.659989  209070 ubuntu.go:190] setting up certificates
	I1109 14:41:31.660008  209070 provision.go:84] configureAuth start
	I1109 14:41:31.660074  209070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:41:31.681613  209070 provision.go:143] copyHostCerts
	I1109 14:41:31.681681  209070 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:41:31.681695  209070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:41:31.681777  209070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:41:31.681868  209070 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:41:31.681882  209070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:41:31.681910  209070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:41:31.681971  209070 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:41:31.681980  209070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:41:31.682004  209070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:41:31.682054  209070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.newest-cni-192074 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-192074]
	I1109 14:41:32.042985  209070 provision.go:177] copyRemoteCerts
	I1109 14:41:32.043057  209070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:41:32.043111  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:32.064051  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:32.187506  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:41:32.235309  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 14:41:32.270666  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:41:32.302928  209070 provision.go:87] duration metric: took 642.878974ms to configureAuth
	I1109 14:41:32.302957  209070 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:41:32.303244  209070 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:32.303394  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:32.329260  209070 main.go:143] libmachine: Using SSH client type: native
	I1109 14:41:32.329816  209070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33085 <nil> <nil>}
	I1109 14:41:32.329851  209070 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:41:32.713720  209070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:41:32.713747  209070 machine.go:97] duration metric: took 4.713560928s to provisionDockerMachine
	I1109 14:41:32.713759  209070 start.go:293] postStartSetup for "newest-cni-192074" (driver="docker")
	I1109 14:41:32.713769  209070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:41:32.713826  209070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:41:32.713886  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:32.738102  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:32.853040  209070 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:41:32.857124  209070 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:41:32.857150  209070 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:41:32.857160  209070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:41:32.857208  209070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:41:32.857285  209070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:41:32.857387  209070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:41:32.871747  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:41:32.894321  209070 start.go:296] duration metric: took 180.546742ms for postStartSetup
	I1109 14:41:32.894405  209070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:41:32.894467  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:32.914585  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:33.028613  209070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:41:33.036164  209070 fix.go:56] duration metric: took 5.388466358s for fixHost
	I1109 14:41:33.036187  209070 start.go:83] releasing machines lock for "newest-cni-192074", held for 5.388518379s
	I1109 14:41:33.036270  209070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-192074
	I1109 14:41:33.061542  209070 ssh_runner.go:195] Run: cat /version.json
	I1109 14:41:33.061595  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:33.061842  209070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:41:33.061888  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:33.102818  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:33.112571  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:33.333329  209070 ssh_runner.go:195] Run: systemctl --version
	I1109 14:41:33.342476  209070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:41:33.416038  209070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:41:33.424834  209070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:41:33.424914  209070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:41:33.441110  209070 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:41:33.441184  209070 start.go:496] detecting cgroup driver to use...
	I1109 14:41:33.441232  209070 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:41:33.441338  209070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:41:33.459701  209070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:41:33.474641  209070 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:41:33.474764  209070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:41:33.494307  209070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:41:33.509018  209070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:41:33.689206  209070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:41:33.870199  209070 docker.go:234] disabling docker service ...
	I1109 14:41:33.870267  209070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:41:33.887710  209070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:41:33.903376  209070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:41:34.063108  209070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:41:34.226857  209070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:41:34.241929  209070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:41:34.261097  209070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:41:34.261168  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.282628  209070 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:41:34.282695  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.294305  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.304450  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.314388  209070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:41:34.325388  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.335482  209070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.346415  209070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:41:34.359123  209070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:41:34.367287  209070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:41:34.379114  209070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:34.536439  209070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:41:34.851123  209070 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:41:34.851271  209070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:41:34.855445  209070 start.go:564] Will wait 60s for crictl version
	I1109 14:41:34.855578  209070 ssh_runner.go:195] Run: which crictl
	I1109 14:41:34.859295  209070 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:41:34.884865  209070 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:41:34.885026  209070 ssh_runner.go:195] Run: crio --version
	I1109 14:41:34.925798  209070 ssh_runner.go:195] Run: crio --version
	I1109 14:41:34.960918  209070 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:41:31.494680  203153 addons.go:515] duration metric: took 1.389247743s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1109 14:41:32.932208  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	I1109 14:41:34.962134  209070 cli_runner.go:164] Run: docker network inspect newest-cni-192074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:41:34.981420  209070 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1109 14:41:34.986110  209070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:41:34.997741  209070 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1109 14:41:34.998894  209070 kubeadm.go:884] updating cluster {Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:41:34.999046  209070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:41:34.999120  209070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:41:35.044081  209070 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:41:35.044108  209070 crio.go:433] Images already preloaded, skipping extraction
	I1109 14:41:35.044164  209070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:41:35.070831  209070 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:41:35.070856  209070 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:41:35.070865  209070 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1109 14:41:35.070977  209070 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-192074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:41:35.071068  209070 ssh_runner.go:195] Run: crio config
	I1109 14:41:35.149614  209070 cni.go:84] Creating CNI manager for ""
	I1109 14:41:35.149639  209070 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:41:35.149658  209070 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1109 14:41:35.149685  209070 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-192074 NodeName:newest-cni-192074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:41:35.149820  209070 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-192074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
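The block above is the multi-document kubeadm v1beta4 configuration minikube generates for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in a single file, later copied to /var/tmp/minikube/kubeadm.yaml.new). A minimal sketch of reading such a file and pulling out the pod subnet, assuming a local copy named kubeadm.yaml and the gopkg.in/yaml.v3 package (both assumptions):

// inspect_kubeadm_config.go - sketch only; file name and yaml.v3 dependency are assumptions.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each YAML document carries its own kind (InitConfiguration, ClusterConfiguration, ...).
		if doc["kind"] == "ClusterConfiguration" {
			if nw, ok := doc["networking"].(map[string]interface{}); ok {
				fmt.Println("podSubnet:", nw["podSubnet"]) // expected 10.42.0.0/16 per the config above
			}
		}
	}
}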
	
	I1109 14:41:35.149904  209070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:41:35.158290  209070 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:41:35.158360  209070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:41:35.167156  209070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:41:35.182739  209070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:41:35.196597  209070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1109 14:41:35.209739  209070 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:41:35.213320  209070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:41:35.222967  209070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:35.334250  209070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:41:35.356427  209070 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074 for IP: 192.168.76.2
	I1109 14:41:35.356504  209070 certs.go:195] generating shared ca certs ...
	I1109 14:41:35.356534  209070 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:35.356703  209070 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:41:35.356792  209070 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:41:35.356816  209070 certs.go:257] generating profile certs ...
	I1109 14:41:35.356923  209070 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/client.key
	I1109 14:41:35.357027  209070 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key.19ad1ce3
	I1109 14:41:35.357100  209070 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.key
	I1109 14:41:35.357243  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:41:35.357309  209070 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:41:35.357336  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:41:35.357395  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:41:35.357437  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:41:35.357489  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:41:35.357557  209070 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:41:35.358193  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:41:35.378364  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:41:35.397537  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:41:35.415588  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:41:35.438749  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:41:35.459783  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 14:41:35.478368  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:41:35.502925  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/newest-cni-192074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 14:41:35.559268  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:41:35.580606  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:41:35.606994  209070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:41:35.628213  209070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:41:35.643589  209070 ssh_runner.go:195] Run: openssl version
	I1109 14:41:35.650069  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:41:35.659669  209070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:35.663434  209070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:35.663501  209070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:41:35.708872  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:41:35.717119  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:41:35.725430  209070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:41:35.729153  209070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:41:35.729213  209070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:41:35.770781  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:41:35.779151  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:41:35.787678  209070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:41:35.792720  209070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:41:35.792820  209070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:41:35.837076  209070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:41:35.844980  209070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:41:35.848849  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:41:35.890950  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:41:35.933888  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:41:35.976588  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:41:36.020221  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:41:36.070028  209070 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
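The series of openssl x509 -noout -in ... -checkend 86400 runs above verifies that each existing control-plane certificate remains valid for at least 24 hours before the cluster state is reused. The equivalent check expressed in Go, as a sketch (the certificate path is a placeholder taken from the log):

// cert_checkend.go - sketch of the "-checkend 86400" check; the file path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent to `openssl x509 -checkend 86400`: fail if the cert expires within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}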
	I1109 14:41:36.114718  209070 kubeadm.go:401] StartCluster: {Name:newest-cni-192074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-192074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:41:36.114810  209070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:41:36.114910  209070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:41:36.189862  209070 cri.go:89] found id: ""
	I1109 14:41:36.189970  209070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:41:36.201414  209070 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:41:36.201438  209070 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:41:36.201531  209070 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:41:36.220210  209070 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:41:36.220797  209070 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-192074" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:36.224636  209070 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-192074" cluster setting kubeconfig missing "newest-cni-192074" context setting]
	I1109 14:41:36.225422  209070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:36.230872  209070 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:41:36.258411  209070 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1109 14:41:36.258447  209070 kubeadm.go:602] duration metric: took 57.002505ms to restartPrimaryControlPlane
	I1109 14:41:36.258484  209070 kubeadm.go:403] duration metric: took 143.773782ms to StartCluster
	I1109 14:41:36.258507  209070 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:36.258590  209070 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:36.259668  209070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:36.260296  209070 config.go:182] Loaded profile config "newest-cni-192074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:36.260576  209070 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:41:36.260662  209070 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-192074"
	I1109 14:41:36.260680  209070 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-192074"
	W1109 14:41:36.260687  209070 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:41:36.260710  209070 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:36.261163  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:36.261342  209070 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:41:36.261648  209070 addons.go:70] Setting dashboard=true in profile "newest-cni-192074"
	I1109 14:41:36.261667  209070 addons.go:239] Setting addon dashboard=true in "newest-cni-192074"
	W1109 14:41:36.261674  209070 addons.go:248] addon dashboard should already be in state true
	I1109 14:41:36.261730  209070 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:36.261761  209070 addons.go:70] Setting default-storageclass=true in profile "newest-cni-192074"
	I1109 14:41:36.261779  209070 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-192074"
	I1109 14:41:36.262054  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:36.262245  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:36.271272  209070 out.go:179] * Verifying Kubernetes components...
	I1109 14:41:36.272678  209070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:41:36.325452  209070 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:41:36.326720  209070 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:36.326740  209070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:41:36.326813  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:36.333095  209070 addons.go:239] Setting addon default-storageclass=true in "newest-cni-192074"
	W1109 14:41:36.333120  209070 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:41:36.333146  209070 host.go:66] Checking if "newest-cni-192074" exists ...
	I1109 14:41:36.333548  209070 cli_runner.go:164] Run: docker container inspect newest-cni-192074 --format={{.State.Status}}
	I1109 14:41:36.336665  209070 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:41:36.338699  209070 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:41:36.339927  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:41:36.339950  209070 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:41:36.340020  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:36.392863  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:36.395129  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:36.397768  209070 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:36.397790  209070 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:41:36.397855  209070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-192074
	I1109 14:41:36.436099  209070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33085 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/newest-cni-192074/id_rsa Username:docker}
	I1109 14:41:36.589364  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:41:36.589392  209070 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:41:36.658275  209070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:41:36.682843  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:41:36.682870  209070 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:41:36.718987  209070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:41:36.725520  209070 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:41:36.725591  209070 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:41:36.728294  209070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:41:36.778142  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:41:36.778168  209070 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:41:36.846208  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:41:36.846234  209070 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:41:36.877494  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:41:36.877518  209070 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:41:36.942318  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:41:36.942349  209070 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:41:37.018326  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:41:37.018356  209070 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:41:37.061016  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:41:37.061040  209070 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:41:37.084261  209070 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:41:37.084286  209070 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:41:37.107101  209070 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1109 14:41:35.432111  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	W1109 14:41:37.432424  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	W1109 14:41:39.932104  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	I1109 14:41:41.541839  209070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.822814805s)
	I1109 14:41:41.542032  209070 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.816424542s)
	I1109 14:41:41.542093  209070 api_server.go:72] duration metric: took 5.280722247s to wait for apiserver process to appear ...
	I1109 14:41:41.542106  209070 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:41:41.542123  209070 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:41.576310  209070 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:41:41.576349  209070 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
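As the log states, minikube is waiting for the apiserver healthz status: it polls https://192.168.76.2:8443/healthz until the endpoint returns 200, and the [-] lines above mark post-start hooks (RBAC bootstrap roles, bootstrap priority classes) that have not yet completed, which is why the endpoint still answers 500. A rough sketch of such a polling loop; skipping TLS verification here is an assumption made only to keep the example short:

// wait_healthz.go - illustrative polling loop; InsecureSkipVerify is used only for brevity.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// Count the checks still failing ("[-]" lines in the verbose healthz output).
			failing := strings.Count(string(body), "[-]")
			fmt.Printf("healthz returned %d, %d checks still failing\n", resp.StatusCode, failing)
		}
		time.Sleep(500 * time.Millisecond)
	}
}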
	I1109 14:41:42.042871  209070 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:42.065919  209070 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:41:42.065959  209070 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:41:42.542396  209070 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:42.552229  209070 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 14:41:42.552309  209070 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 14:41:42.732935  209070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.004608389s)
	I1109 14:41:42.733115  209070 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.625982127s)
	I1109 14:41:42.736332  209070 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-192074 addons enable metrics-server
	
	I1109 14:41:42.739365  209070 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1109 14:41:42.742180  209070 addons.go:515] duration metric: took 6.481595896s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1109 14:41:43.042572  209070 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:41:43.051050  209070 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:41:43.052383  209070 api_server.go:141] control plane version: v1.34.1
	I1109 14:41:43.052418  209070 api_server.go:131] duration metric: took 1.510300299s to wait for apiserver health ...
	I1109 14:41:43.052445  209070 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:41:43.055718  209070 system_pods.go:59] 8 kube-system pods found
	I1109 14:41:43.055759  209070 system_pods.go:61] "coredns-66bc5c9577-6brdt" [50d6b82b-8e51-463c-82a3-a4a103105b6a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:41:43.055769  209070 system_pods.go:61] "etcd-newest-cni-192074" [c5ddb834-a41a-4e78-8b40-e27ff57c60d9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:41:43.055775  209070 system_pods.go:61] "kindnet-gmcpd" [00d2ffcc-cb88-4632-8efd-e59fe208d3c8] Running
	I1109 14:41:43.055783  209070 system_pods.go:61] "kube-apiserver-newest-cni-192074" [b2ba9393-513f-4735-b9fa-713bf9ac8fed] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:41:43.055791  209070 system_pods.go:61] "kube-controller-manager-newest-cni-192074" [5ecab913-3ba7-42cc-a66f-7a8e512c6c71] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:41:43.055801  209070 system_pods.go:61] "kube-proxy-vjt4x" [4f389cd7-7dd5-439e-b590-9e4390f0a638] Running
	I1109 14:41:43.055809  209070 system_pods.go:61] "kube-scheduler-newest-cni-192074" [265941ce-4026-4e49-891b-10d612942e7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:41:43.055819  209070 system_pods.go:61] "storage-provisioner" [9f3003f1-507f-461b-bff4-e19dafefcd23] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 14:41:43.055827  209070 system_pods.go:74] duration metric: took 3.369435ms to wait for pod list to return data ...
	I1109 14:41:43.055840  209070 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:41:43.058730  209070 default_sa.go:45] found service account: "default"
	I1109 14:41:43.058757  209070 default_sa.go:55] duration metric: took 2.910033ms for default service account to be created ...
	I1109 14:41:43.058771  209070 kubeadm.go:587] duration metric: took 6.797398967s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 14:41:43.058801  209070 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:41:43.062166  209070 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:41:43.062239  209070 node_conditions.go:123] node cpu capacity is 2
	I1109 14:41:43.062269  209070 node_conditions.go:105] duration metric: took 3.449542ms to run NodePressure ...
	I1109 14:41:43.062284  209070 start.go:242] waiting for startup goroutines ...
	I1109 14:41:43.062291  209070 start.go:247] waiting for cluster config update ...
	I1109 14:41:43.062304  209070 start.go:256] writing updated cluster config ...
	I1109 14:41:43.062597  209070 ssh_runner.go:195] Run: rm -f paused
	I1109 14:41:43.124645  209070 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:41:43.128707  209070 out.go:179] * Done! kubectl is now configured to use "newest-cni-192074" cluster and "default" namespace by default
	W1109 14:41:41.933022  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	W1109 14:41:43.934352  203153 node_ready.go:57] node "no-preload-545474" has "Ready":"False" status (will retry)
	I1109 14:41:45.432413  203153 node_ready.go:49] node "no-preload-545474" is "Ready"
	I1109 14:41:45.432439  203153 node_ready.go:38] duration metric: took 14.503978334s for node "no-preload-545474" to be "Ready" ...
	I1109 14:41:45.432452  203153 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:41:45.432508  203153 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:41:45.458511  203153 api_server.go:72] duration metric: took 15.353476403s to wait for apiserver process to appear ...
	I1109 14:41:45.458534  203153 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:41:45.458554  203153 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:41:45.483035  203153 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1109 14:41:45.484958  203153 api_server.go:141] control plane version: v1.34.1
	I1109 14:41:45.485029  203153 api_server.go:131] duration metric: took 26.486724ms to wait for apiserver health ...
	I1109 14:41:45.485053  203153 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:41:45.496399  203153 system_pods.go:59] 8 kube-system pods found
	I1109 14:41:45.496435  203153 system_pods.go:61] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:41:45.496443  203153 system_pods.go:61] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running
	I1109 14:41:45.496449  203153 system_pods.go:61] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:41:45.496453  203153 system_pods.go:61] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running
	I1109 14:41:45.496459  203153 system_pods.go:61] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running
	I1109 14:41:45.496463  203153 system_pods.go:61] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:41:45.496468  203153 system_pods.go:61] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running
	I1109 14:41:45.496478  203153 system_pods.go:61] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:41:45.496484  203153 system_pods.go:74] duration metric: took 11.414803ms to wait for pod list to return data ...
	I1109 14:41:45.496492  203153 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:41:45.500447  203153 default_sa.go:45] found service account: "default"
	I1109 14:41:45.500525  203153 default_sa.go:55] duration metric: took 4.025917ms for default service account to be created ...
	I1109 14:41:45.500549  203153 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:41:45.506604  203153 system_pods.go:86] 8 kube-system pods found
	I1109 14:41:45.506638  203153 system_pods.go:89] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:41:45.506650  203153 system_pods.go:89] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running
	I1109 14:41:45.506658  203153 system_pods.go:89] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:41:45.506662  203153 system_pods.go:89] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running
	I1109 14:41:45.506666  203153 system_pods.go:89] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running
	I1109 14:41:45.506670  203153 system_pods.go:89] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:41:45.506674  203153 system_pods.go:89] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running
	I1109 14:41:45.506679  203153 system_pods.go:89] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:41:45.506702  203153 retry.go:31] will retry after 256.483545ms: missing components: kube-dns
	I1109 14:41:45.772375  203153 system_pods.go:86] 8 kube-system pods found
	I1109 14:41:45.772455  203153 system_pods.go:89] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:41:45.772477  203153 system_pods.go:89] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running
	I1109 14:41:45.772503  203153 system_pods.go:89] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:41:45.772523  203153 system_pods.go:89] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running
	I1109 14:41:45.772542  203153 system_pods.go:89] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running
	I1109 14:41:45.772561  203153 system_pods.go:89] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:41:45.772578  203153 system_pods.go:89] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running
	I1109 14:41:45.772597  203153 system_pods.go:89] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:41:45.772657  203153 retry.go:31] will retry after 321.295946ms: missing components: kube-dns
	I1109 14:41:46.099291  203153 system_pods.go:86] 8 kube-system pods found
	I1109 14:41:46.099327  203153 system_pods.go:89] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:41:46.099335  203153 system_pods.go:89] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running
	I1109 14:41:46.099342  203153 system_pods.go:89] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:41:46.099355  203153 system_pods.go:89] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running
	I1109 14:41:46.099363  203153 system_pods.go:89] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running
	I1109 14:41:46.099367  203153 system_pods.go:89] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:41:46.099377  203153 system_pods.go:89] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running
	I1109 14:41:46.099384  203153 system_pods.go:89] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:41:46.099401  203153 retry.go:31] will retry after 486.675432ms: missing components: kube-dns
	I1109 14:41:46.597704  203153 system_pods.go:86] 8 kube-system pods found
	I1109 14:41:46.597754  203153 system_pods.go:89] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:41:46.597763  203153 system_pods.go:89] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running
	I1109 14:41:46.597770  203153 system_pods.go:89] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:41:46.597776  203153 system_pods.go:89] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running
	I1109 14:41:46.597782  203153 system_pods.go:89] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running
	I1109 14:41:46.597786  203153 system_pods.go:89] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:41:46.597800  203153 system_pods.go:89] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running
	I1109 14:41:46.597806  203153 system_pods.go:89] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:41:46.597814  203153 system_pods.go:126] duration metric: took 1.097247921s to wait for k8s-apps to be running ...
	I1109 14:41:46.597827  203153 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:41:46.597889  203153 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:41:46.617201  203153 system_svc.go:56] duration metric: took 19.364073ms WaitForService to wait for kubelet
	I1109 14:41:46.617233  203153 kubeadm.go:587] duration metric: took 16.512205266s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:41:46.617253  203153 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:41:46.635989  203153 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:41:46.636021  203153 node_conditions.go:123] node cpu capacity is 2
	I1109 14:41:46.636033  203153 node_conditions.go:105] duration metric: took 18.774693ms to run NodePressure ...
	I1109 14:41:46.636046  203153 start.go:242] waiting for startup goroutines ...
	I1109 14:41:46.636054  203153 start.go:247] waiting for cluster config update ...
	I1109 14:41:46.636065  203153 start.go:256] writing updated cluster config ...
	I1109 14:41:46.636394  203153 ssh_runner.go:195] Run: rm -f paused
	I1109 14:41:46.641372  203153 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:41:46.649045  203153 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gq42x" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:46.677663  203153 pod_ready.go:94] pod "coredns-66bc5c9577-gq42x" is "Ready"
	I1109 14:41:46.677706  203153 pod_ready.go:86] duration metric: took 28.624862ms for pod "coredns-66bc5c9577-gq42x" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:46.695574  203153 pod_ready.go:83] waiting for pod "etcd-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:46.710272  203153 pod_ready.go:94] pod "etcd-no-preload-545474" is "Ready"
	I1109 14:41:46.710315  203153 pod_ready.go:86] duration metric: took 14.709858ms for pod "etcd-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:46.714470  203153 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:46.723273  203153 pod_ready.go:94] pod "kube-apiserver-no-preload-545474" is "Ready"
	I1109 14:41:46.723326  203153 pod_ready.go:86] duration metric: took 8.827103ms for pod "kube-apiserver-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:46.727908  203153 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:47.045636  203153 pod_ready.go:94] pod "kube-controller-manager-no-preload-545474" is "Ready"
	I1109 14:41:47.045667  203153 pod_ready.go:86] duration metric: took 317.727848ms for pod "kube-controller-manager-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:47.245810  203153 pod_ready.go:83] waiting for pod "kube-proxy-2mnwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:47.645831  203153 pod_ready.go:94] pod "kube-proxy-2mnwv" is "Ready"
	I1109 14:41:47.645860  203153 pod_ready.go:86] duration metric: took 400.021963ms for pod "kube-proxy-2mnwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:47.846777  203153 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:48.246211  203153 pod_ready.go:94] pod "kube-scheduler-no-preload-545474" is "Ready"
	I1109 14:41:48.246236  203153 pod_ready.go:86] duration metric: took 399.434686ms for pod "kube-scheduler-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:41:48.246253  203153 pod_ready.go:40] duration metric: took 1.604840069s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:41:48.341114  203153 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:41:48.344746  203153 out.go:179] * Done! kubectl is now configured to use "no-preload-545474" cluster and "default" namespace by default
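
The api_server.go lines above record minikube polling the apiserver's /healthz endpoint roughly every 500ms until the rbac/bootstrap-roles post-start hook completes and the response flips from 500 to 200 ("ok"). As a rough illustration only, the same kind of readiness poll can be written as a small standalone Go loop; the endpoint URL, timeout, and TLS handling below are assumptions for the sketch, not minikube's actual implementation.

	// healthzpoll: illustrative sketch of the readiness poll seen in the
	// api_server.go log lines above - keep requesting /healthz until it returns 200.
	// The URL, deadline, and InsecureSkipVerify are assumptions for this example.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// The apiserver serves a self-signed cert in this setup, so the sketch
		// skips verification; a real client would trust the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.76.2:8443/healthz" // address taken from the log above
	
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // "ok" - apiserver healthy, as logged at 14:41:43 above
				}
			}
			time.Sleep(500 * time.Millisecond) // log shows ~500ms between attempts
		}
		fmt.Println("gave up waiting for /healthz")
	}
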
	
	
	==> CRI-O <==
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.79957338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.806419377Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6db8eb90-cd75-4673-9c2b-c0783d2d364e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.816869888Z" level=info msg="Ran pod sandbox a5c8dbb14dc04a668f880818d9df0b34b401a810e78f1029111ab427669769eb with infra container: kube-system/kindnet-gmcpd/POD" id=6db8eb90-cd75-4673-9c2b-c0783d2d364e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.824324219Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-vjt4x/POD" id=4df067d4-fd06-4ab6-8bd0-b14042619d81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.824395062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.826471127Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=17ab18dc-348f-4847-af2d-14b262fd2339 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.829051057Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2c82af5c-9bf8-4ddc-8e80-0eeb171f52b5 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.829598697Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=4df067d4-fd06-4ab6-8bd0-b14042619d81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.831067216Z" level=info msg="Creating container: kube-system/kindnet-gmcpd/kindnet-cni" id=48083390-d500-4134-bde2-123a604854ba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.831160361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.856776841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.859608515Z" level=info msg="Ran pod sandbox 2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d with infra container: kube-system/kube-proxy-vjt4x/POD" id=4df067d4-fd06-4ab6-8bd0-b14042619d81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.859845243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.862168252Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=62c9999e-3f28-453a-a17f-ad9b95b4aaee name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.866497408Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6ed0db08-79d9-4589-bf3c-83387d98b5be name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.868655492Z" level=info msg="Creating container: kube-system/kube-proxy-vjt4x/kube-proxy" id=43263849-d3b2-4acc-8185-79337cf27b84 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.868949641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.878038489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.880276164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.903811844Z" level=info msg="Created container 4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0: kube-system/kindnet-gmcpd/kindnet-cni" id=48083390-d500-4134-bde2-123a604854ba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.906956038Z" level=info msg="Starting container: 4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0" id=06ea8948-bf65-41e2-8032-b857a2b83dcf name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.909384122Z" level=info msg="Started container" PID=1055 containerID=4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0 description=kube-system/kindnet-gmcpd/kindnet-cni id=06ea8948-bf65-41e2-8032-b857a2b83dcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5c8dbb14dc04a668f880818d9df0b34b401a810e78f1029111ab427669769eb
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.948103478Z" level=info msg="Created container eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19: kube-system/kube-proxy-vjt4x/kube-proxy" id=43263849-d3b2-4acc-8185-79337cf27b84 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.949026334Z" level=info msg="Starting container: eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19" id=f2971588-4e01-4a23-9dc5-ddde7b49d5b8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:41:41 newest-cni-192074 crio[612]: time="2025-11-09T14:41:41.951595991Z" level=info msg="Started container" PID=1061 containerID=eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19 description=kube-system/kube-proxy-vjt4x/kube-proxy id=f2971588-4e01-4a23-9dc5-ddde7b49d5b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	eec42ffc8671c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   2d607777fd17f       kube-proxy-vjt4x                            kube-system
	4bbbd857426df       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   a5c8dbb14dc04       kindnet-gmcpd                               kube-system
	c913d6d7b55a2       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   44c075c86d06e       kube-scheduler-newest-cni-192074            kube-system
	586cbf2507c60       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   059d5ebfe832e       kube-apiserver-newest-cni-192074            kube-system
	afd353198acd9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   514c43d857b7e       etcd-newest-cni-192074                      kube-system
	4906171ae291f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   843c399279a02       kube-controller-manager-newest-cni-192074   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-192074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-192074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=newest-cni-192074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_41_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:41:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-192074
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:41:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:41:41 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:41:41 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:41:41 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 09 Nov 2025 14:41:41 +0000   Sun, 09 Nov 2025 14:41:01 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-192074
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c64f5fab-6069-4738-9e11-1ea44009e643
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-192074                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-gmcpd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-192074             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-192074    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-vjt4x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-192074             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node newest-cni-192074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node newest-cni-192074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node newest-cni-192074 status is now: NodeHasSufficientPID
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-192074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-192074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-192074 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-192074 event: Registered Node newest-cni-192074 in Controller
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-192074 event: Registered Node newest-cni-192074 in Controller
	
	
	==> dmesg <==
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:40] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:41] overlayfs: idmapped layers are currently not supported
	[ +35.139553] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [afd353198acd97ab297fdf63f5ed475dde326bf68ef3c2d1001f999ea14a25ac] <==
	{"level":"warn","ts":"2025-11-09T14:41:39.635973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.652082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.708026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.718734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.721134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.754074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.795568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.879296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.900884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.924284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.955170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:39.994743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.024289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.068164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.080361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.113804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.155346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.166223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.185334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.206983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.227797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.259794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.296153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.307567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:40.397207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47990","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:41:49 up  1:24,  0 user,  load average: 6.47, 4.34, 3.19
	Linux newest-cni-192074 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4bbbd857426df358f0a97cc69c9e38cb57d717c134067204afb845ff0948a1b0] <==
	I1109 14:41:42.047239       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:41:42.047767       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1109 14:41:42.047936       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:41:42.048007       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:41:42.048043       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:41:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:41:42.328382       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:41:42.328409       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:41:42.328419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:41:42.333463       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [586cbf2507c60a0c5f2a7a6dbb1b3df9ad1c324498ff6f1875d3fecc41181903] <==
	I1109 14:41:41.366784       1 autoregister_controller.go:144] Starting autoregister controller
	I1109 14:41:41.366791       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1109 14:41:41.366799       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:41:41.375068       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:41:41.375329       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:41:41.375353       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1109 14:41:41.375423       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1109 14:41:41.375458       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:41:41.396566       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:41:41.396591       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:41:41.397311       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:41:41.397378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:41:41.410057       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1109 14:41:41.425011       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:41:41.622775       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:41:41.997164       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:41:42.245943       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:41:42.335528       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:41:42.386276       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:41:42.402970       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:41:42.609328       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.213.87"}
	I1109 14:41:42.627022       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.184.126"}
	I1109 14:41:45.308967       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:41:45.401959       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:41:45.445078       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4906171ae291fe25c62fa24b9abc955b2e431c04f03b82b97bc5dac9dabbf8a3] <==
	I1109 14:41:44.979530       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:41:44.984277       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1109 14:41:44.986815       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:41:44.990261       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:41:44.990391       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1109 14:41:44.990542       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1109 14:41:44.990618       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:41:44.991086       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1109 14:41:44.991564       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-192074"
	I1109 14:41:44.991665       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:41:44.995319       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:41:44.995623       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1109 14:41:44.995805       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:41:44.995901       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:41:44.995914       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1109 14:41:44.998542       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1109 14:41:44.998788       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:41:44.999201       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:41:45.000723       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:41:45.000808       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:41:45.000817       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:41:45.000825       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:41:45.006500       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:41:45.028791       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:41:45.029548       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [eec42ffc8671c548030a782bbc9411ebe0b4af2d70e2f7d365da735aaaf5cb19] <==
	I1109 14:41:42.077335       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:41:42.312270       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:41:42.412843       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:41:42.412882       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1109 14:41:42.412955       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:41:42.629723       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:41:42.629780       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:41:42.675975       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:41:42.676289       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:41:42.676305       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:41:42.681104       1 config.go:200] "Starting service config controller"
	I1109 14:41:42.683983       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:41:42.684108       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:41:42.684153       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:41:42.684192       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:41:42.684234       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:41:42.685140       1 config.go:309] "Starting node config controller"
	I1109 14:41:42.688178       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:41:42.688293       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:41:42.785197       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:41:42.785203       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:41:42.785236       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c913d6d7b55a2bf4363aa340114681f62876da6b25f96a1bb3b282eda1b60139] <==
	I1109 14:41:39.226289       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:41:41.232140       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:41:41.232175       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:41:41.232185       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:41:41.232305       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:41:41.361327       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:41:41.361369       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:41:41.375926       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:41:41.376073       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:41:41.384608       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:41:41.384700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:41:41.480974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:41:40 newest-cni-192074 kubelet[733]: E1109 14:41:40.136130     733 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-192074\" not found" node="newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.191074     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: E1109 14:41:41.400562     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-192074\" already exists" pod="kube-system/kube-controller-manager-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.400600     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.483059     733 apiserver.go:52] "Watching apiserver"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: E1109 14:41:41.498520     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-192074\" already exists" pod="kube-system/kube-scheduler-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.498732     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.505220     733 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.505469     733 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.505570     733 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.506519     733 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: E1109 14:41:41.587954     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-192074\" already exists" pod="kube-system/etcd-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.588008     733 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.594129     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599131     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4f389cd7-7dd5-439e-b590-9e4390f0a638-xtables-lock\") pod \"kube-proxy-vjt4x\" (UID: \"4f389cd7-7dd5-439e-b590-9e4390f0a638\") " pod="kube-system/kube-proxy-vjt4x"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599317     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4f389cd7-7dd5-439e-b590-9e4390f0a638-lib-modules\") pod \"kube-proxy-vjt4x\" (UID: \"4f389cd7-7dd5-439e-b590-9e4390f0a638\") " pod="kube-system/kube-proxy-vjt4x"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599412     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-xtables-lock\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599536     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-lib-modules\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.599685     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/00d2ffcc-cb88-4632-8efd-e59fe208d3c8-cni-cfg\") pod \"kindnet-gmcpd\" (UID: \"00d2ffcc-cb88-4632-8efd-e59fe208d3c8\") " pod="kube-system/kindnet-gmcpd"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: E1109 14:41:41.623512     733 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-192074\" already exists" pod="kube-system/kube-apiserver-newest-cni-192074"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: I1109 14:41:41.640063     733 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:41:41 newest-cni-192074 kubelet[733]: W1109 14:41:41.858167     733 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6efa62eda748b61ec1d68030412467520212e736b73f094b7d6592d76bede223/crio-2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d WatchSource:0}: Error finding container 2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d: Status 404 returned error can't find the container with id 2d607777fd17fede81438faa3ea6044f81aea859d2fb6ca8ca33868822f3635d
	Nov 09 14:41:44 newest-cni-192074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:41:44 newest-cni-192074 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:41:44 newest-cni-192074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
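Note on the kube-proxy warning in the log above ("nodePortAddresses is unset; NodePort connections will be accepted on all local IPs"): the message points at a KubeProxyConfiguration field rather than anything this test sets. Purely as an illustrative sketch of what the suggested `--nodeport-addresses primary` setting looks like in configuration form (how such a fragment would be wired into a minikube profile is not shown in this report and is an assumption here):

	# illustrative KubeProxyConfiguration fragment; the field name and the special
	# "primary" value are taken from the warning message quoted above
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses:
	  - primary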
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-192074 -n newest-cni-192074
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-192074 -n newest-cni-192074: exit status 2 (388.911972ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-192074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-6brdt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jm9lh kubernetes-dashboard-855c9754f9-hdpw8
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jm9lh kubernetes-dashboard-855c9754f9-hdpw8
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jm9lh kubernetes-dashboard-855c9754f9-hdpw8: exit status 1 (85.967208ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-6brdt" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-jm9lh" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-hdpw8" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-192074 describe pod coredns-66bc5c9577-6brdt storage-provisioner dashboard-metrics-scraper-6ffb444bf9-jm9lh kubernetes-dashboard-855c9754f9-hdpw8: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.46s)
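For reference, the pause flow exercised by this test can be replayed by hand against a comparable profile; the commands below simply mirror what the audit log later in this report records for this profile (pause with the same flags, followed by a status check), plus the corresponding unpause. This is a manual sketch, not the test harness itself:

	out/minikube-linux-arm64 pause -p newest-cni-192074 --alsologtostderr -v=1
	out/minikube-linux-arm64 status -p newest-cni-192074
	out/minikube-linux-arm64 unpause -p newest-cni-192074 --alsologtostderr -v=1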

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-545474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-545474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (304.956671ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:41:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
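The MK_ADDON_ENABLE_PAUSED error above is minikube's paused-state check failing: per the stderr, it shells out to `sudo runc list -f json`, which in turn cannot open /run/runc. A quick manual way to look at the same state on the node (a diagnostic sketch, not part of the test) would be:

	out/minikube-linux-arm64 -p no-preload-545474 ssh "sudo runc list -f json"
	out/minikube-linux-arm64 -p no-preload-545474 ssh "ls /run/runc"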
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-545474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-545474 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-545474 describe deploy/metrics-server -n kube-system: exit status 1 (81.536228ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-545474 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
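The assertion above is about the image reference the metrics-server addon was templated with (the test passes --images and --registries overrides, so it expects fake.domain/registry.k8s.io/echoserver:1.4). Had the deployment been created, one way to check that field directly is a jsonpath query along these lines (a sketch; here it would fail the same way the describe call did, since the deployment was never created):

	kubectl --context no-preload-545474 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'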
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-545474
helpers_test.go:243: (dbg) docker inspect no-preload-545474:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be",
	        "Created": "2025-11-09T14:40:31.3484438Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203631,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:40:32.322190666Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/hostname",
	        "HostsPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/hosts",
	        "LogPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be-json.log",
	        "Name": "/no-preload-545474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-545474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-545474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be",
	                "LowerDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-545474",
	                "Source": "/var/lib/docker/volumes/no-preload-545474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-545474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-545474",
	                "name.minikube.sigs.k8s.io": "no-preload-545474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4aa655b92f25e3f5880b737ff3d912905e65aaa691abf4791b229cedac81db20",
	            "SandboxKey": "/var/run/docker/netns/4aa655b92f25",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-545474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:96:56:7c:0a:f9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb0cf9a1901884390b78ed227402aaa4fd370ba585a10d7d075f56046116850c",
	                    "EndpointID": "6875b3e5d2e833a56a775b62441d367e37d55680f159d799779b5c7bb655ff52",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-545474",
	                        "435b3ae5d443"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
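In the inspect output above, the kicbase container publishes each node port on a localhost port chosen at container start (the API server's 8443/tcp is bound to 127.0.0.1:33078 here), which is how the host reaches the cluster. A shorter way to read a single mapping than parsing the full JSON is `docker port`; for example (expected output taken from the inspect data above):

	docker port no-preload-545474 8443
	# 127.0.0.1:33078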
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-545474 -n no-preload-545474
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-545474 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-545474 logs -n 25: (1.606835721s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:39 UTC │
	│ start   │ -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:39 UTC │ 09 Nov 25 14:40 UTC │
	│ image   │ default-k8s-diff-port-103048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p default-k8s-diff-port-103048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-274584                                                                                                                                                                                                               │ disable-driver-mounts-274584 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ stop    │ -p newest-cni-192074 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-192074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ image   │ newest-cni-192074 image list --format=json                                                                                                                                                                                                    │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ pause   │ -p newest-cni-192074 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ delete  │ -p newest-cni-192074                                                                                                                                                                                                                          │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ delete  │ -p newest-cni-192074                                                                                                                                                                                                                          │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ start   │ -p auto-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-241021                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-545474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:41:52
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:41:52.759135  212661 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:41:52.759292  212661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:41:52.759303  212661 out.go:374] Setting ErrFile to fd 2...
	I1109 14:41:52.759335  212661 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:41:52.759619  212661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:41:52.760140  212661 out.go:368] Setting JSON to false
	I1109 14:41:52.761098  212661 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5063,"bootTime":1762694250,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:41:52.761169  212661 start.go:143] virtualization:  
	I1109 14:41:52.767553  212661 out.go:179] * [auto-241021] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:41:52.771025  212661 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:41:52.771088  212661 notify.go:221] Checking for updates...
	I1109 14:41:52.777702  212661 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:41:52.780997  212661 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:41:52.784132  212661 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:41:52.787256  212661 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:41:52.790286  212661 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:41:52.793950  212661 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:41:52.794046  212661 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:41:52.824011  212661 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:41:52.824147  212661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:41:52.887787  212661 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:41:52.877450851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:41:52.887991  212661 docker.go:319] overlay module found
	I1109 14:41:52.893042  212661 out.go:179] * Using the docker driver based on user configuration
	I1109 14:41:52.896033  212661 start.go:309] selected driver: docker
	I1109 14:41:52.896059  212661 start.go:930] validating driver "docker" against <nil>
	I1109 14:41:52.896073  212661 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:41:52.896862  212661 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:41:52.951249  212661 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 14:41:52.942215085 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:41:52.951424  212661 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 14:41:52.951667  212661 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:41:52.954502  212661 out.go:179] * Using Docker driver with root privileges
	I1109 14:41:52.957496  212661 cni.go:84] Creating CNI manager for ""
	I1109 14:41:52.957568  212661 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:41:52.957581  212661 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 14:41:52.957665  212661 start.go:353] cluster config:
	{Name:auto-241021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-241021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1109 14:41:52.960965  212661 out.go:179] * Starting "auto-241021" primary control-plane node in "auto-241021" cluster
	I1109 14:41:52.963774  212661 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:41:52.966917  212661 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:41:52.969792  212661 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:41:52.969819  212661 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:41:52.969839  212661 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 14:41:52.969879  212661 cache.go:65] Caching tarball of preloaded images
	I1109 14:41:52.969973  212661 preload.go:238] Found /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1109 14:41:52.969983  212661 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1109 14:41:52.970092  212661 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/config.json ...
	I1109 14:41:52.970120  212661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/config.json: {Name:mk1be68fabb121cf49cad88b4159d0e8e3eb18e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:41:52.993387  212661 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:41:52.993409  212661 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:41:52.993422  212661 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:41:52.993447  212661 start.go:360] acquireMachinesLock for auto-241021: {Name:mkc5c8eae1240be729310a73f917492e7a534548 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:41:52.993551  212661 start.go:364] duration metric: took 83.882µs to acquireMachinesLock for "auto-241021"
	I1109 14:41:52.993581  212661 start.go:93] Provisioning new machine with config: &{Name:auto-241021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-241021 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:41:52.993706  212661 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 09 14:41:45 no-preload-545474 crio[839]: time="2025-11-09T14:41:45.973938568Z" level=info msg="Created container 93081599cd8556304d2c80ee5c71b89bfa658750dee1317dd550c5434ce72fdc: kube-system/coredns-66bc5c9577-gq42x/coredns" id=27e48ea9-d92c-4a6f-8cb8-dc8e6bc53ff9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:45 no-preload-545474 crio[839]: time="2025-11-09T14:41:45.976600337Z" level=info msg="Starting container: 93081599cd8556304d2c80ee5c71b89bfa658750dee1317dd550c5434ce72fdc" id=3a48879c-4d9e-47bd-9a38-3775f9f5b81b name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:41:45 no-preload-545474 crio[839]: time="2025-11-09T14:41:45.982954046Z" level=info msg="Started container" PID=2485 containerID=93081599cd8556304d2c80ee5c71b89bfa658750dee1317dd550c5434ce72fdc description=kube-system/coredns-66bc5c9577-gq42x/coredns id=3a48879c-4d9e-47bd-9a38-3775f9f5b81b name=/runtime.v1.RuntimeService/StartContainer sandboxID=235c2e0fcf319d6fdc41e4dad4ffe1e90161866c8e413ea063e98259250e4722
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.917372296Z" level=info msg="Running pod sandbox: default/busybox/POD" id=49b7090a-269f-40c2-93ea-1e5439eb46ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.917448744Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.930184306Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0f2d1a6beebc7dd6073d295315b25542f9c70277c047dec0788bccac1f56a483 UID:cf172c15-bc73-4b57-b8a2-5a67c4f6b615 NetNS:/var/run/netns/d45f8a33-a4d8-45bd-8c76-6bbc36d32d23 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cdb0}] Aliases:map[]}"
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.930220861Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.942410597Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0f2d1a6beebc7dd6073d295315b25542f9c70277c047dec0788bccac1f56a483 UID:cf172c15-bc73-4b57-b8a2-5a67c4f6b615 NetNS:/var/run/netns/d45f8a33-a4d8-45bd-8c76-6bbc36d32d23 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cdb0}] Aliases:map[]}"
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.942706887Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.950874078Z" level=info msg="Ran pod sandbox 0f2d1a6beebc7dd6073d295315b25542f9c70277c047dec0788bccac1f56a483 with infra container: default/busybox/POD" id=49b7090a-269f-40c2-93ea-1e5439eb46ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.952310998Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be75bfbc-6764-4a7f-9fda-9fc21a786247 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.952598295Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=be75bfbc-6764-4a7f-9fda-9fc21a786247 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.95277401Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=be75bfbc-6764-4a7f-9fda-9fc21a786247 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.953704792Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dd6c456b-8af0-4af2-8afe-dec665157ff9 name=/runtime.v1.ImageService/PullImage
	Nov 09 14:41:48 no-preload-545474 crio[839]: time="2025-11-09T14:41:48.95930921Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.058282314Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=dd6c456b-8af0-4af2-8afe-dec665157ff9 name=/runtime.v1.ImageService/PullImage
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.059468041Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a6ec4156-d3f1-4552-bfe5-4ab35bac4bd7 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.064299931Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e9322416-1954-4947-8442-402e089a59d5 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.071728128Z" level=info msg="Creating container: default/busybox/busybox" id=a7318d09-364a-4156-bbf8-728a6a3df35a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.072290759Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.082482364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.083171831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.109228891Z" level=info msg="Created container 9cb4ddffdaac3bb9a50ddc833c3967e40149719f7144ae55b25bcd85d85a1b23: default/busybox/busybox" id=a7318d09-364a-4156-bbf8-728a6a3df35a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.110992895Z" level=info msg="Starting container: 9cb4ddffdaac3bb9a50ddc833c3967e40149719f7144ae55b25bcd85d85a1b23" id=22fe310f-b00c-406c-bb18-c4bdcd30cefd name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:41:51 no-preload-545474 crio[839]: time="2025-11-09T14:41:51.121383Z" level=info msg="Started container" PID=2538 containerID=9cb4ddffdaac3bb9a50ddc833c3967e40149719f7144ae55b25bcd85d85a1b23 description=default/busybox/busybox id=22fe310f-b00c-406c-bb18-c4bdcd30cefd name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f2d1a6beebc7dd6073d295315b25542f9c70277c047dec0788bccac1f56a483
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9cb4ddffdaac3       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   0f2d1a6beebc7       busybox                                     default
	93081599cd855       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   235c2e0fcf319       coredns-66bc5c9577-gq42x                    kube-system
	e1dc67b6668b1       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   d1492ef33cb8a       storage-provisioner                         kube-system
	0f1980a342049       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   2d9729df6c7d1       kindnet-t9j49                               kube-system
	815cf768902d1       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   de6de9ddb6211       kube-proxy-2mnwv                            kube-system
	fe5e5e2c794d8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      43 seconds ago      Running             kube-scheduler            0                   88d3b8970566b       kube-scheduler-no-preload-545474            kube-system
	6ad007f71a96e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      43 seconds ago      Running             kube-controller-manager   0                   552bc88512059       kube-controller-manager-no-preload-545474   kube-system
	ddf450436d3ca       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      43 seconds ago      Running             etcd                      0                   fecf13b2b3f20       etcd-no-preload-545474                      kube-system
	bd7dc765c7d6c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      43 seconds ago      Running             kube-apiserver            0                   52e581caab6cf       kube-apiserver-no-preload-545474            kube-system
	
	
	==> coredns [93081599cd8556304d2c80ee5c71b89bfa658750dee1317dd550c5434ce72fdc] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48553 - 14924 "HINFO IN 2546767305122101769.2183527125794707024. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011654961s
	
	
	==> describe nodes <==
	Name:               no-preload-545474
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-545474
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=no-preload-545474
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_41_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:41:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-545474
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:41:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:41:56 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:41:56 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:41:56 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:41:56 +0000   Sun, 09 Nov 2025 14:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-545474
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c8e11a83-d01e-4114-9a5f-a54126ee8120
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-gq42x                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-545474                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-t9j49                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-545474             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-545474    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-2mnwv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-545474             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node no-preload-545474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node no-preload-545474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node no-preload-545474 status is now: NodeHasSufficientPID
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node no-preload-545474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node no-preload-545474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node no-preload-545474 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-545474 event: Registered Node no-preload-545474 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-545474 status is now: NodeReady
	
	
	==> dmesg <==
	[ +26.909872] overlayfs: idmapped layers are currently not supported
	[  +3.850831] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:40] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:41] overlayfs: idmapped layers are currently not supported
	[ +35.139553] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ddf450436d3ca2ad7394721c1ddd56250aef258cb0526514b0af8c6fdeeb9bf1] <==
	{"level":"warn","ts":"2025-11-09T14:41:19.069222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.151174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.164342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.215708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.258115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.297765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.350950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.390740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.406552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.447658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.530758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.565426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.606853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.628986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.676964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.775336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.814579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.855107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.894169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.919195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:19.979351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:20.060276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:20.070519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:20.133859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:41:20.355221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33346","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:41:58 up  1:24,  0 user,  load average: 6.04, 4.28, 3.18
	Linux no-preload-545474 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0f1980a342049962f85aa5b620b1b413bbf6a0f169530c27f7c40e4173e8202a] <==
	I1109 14:41:34.619755       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:41:34.620142       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:41:34.620299       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:41:34.620340       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:41:34.620375       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:41:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:41:34.820289       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:41:34.820366       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:41:34.820401       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:41:34.820538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1109 14:41:35.020588       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:41:35.020685       1 metrics.go:72] Registering metrics
	I1109 14:41:35.020772       1 controller.go:711] "Syncing nftables rules"
	I1109 14:41:44.827544       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:41:44.827686       1 main.go:301] handling current node
	I1109 14:41:54.821406       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:41:54.821512       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bd7dc765c7d6c44df535732d4100f116e90a4235f5f389565de60ee33e14b26a] <==
	I1109 14:41:21.974051       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:41:21.974197       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1109 14:41:21.998666       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:41:22.055592       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:41:22.055656       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:41:22.055793       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:41:22.061312       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:41:22.061530       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1109 14:41:22.598725       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1109 14:41:22.606966       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1109 14:41:22.606988       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:41:23.657371       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:41:23.734856       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:41:23.840741       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:41:23.843320       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1109 14:41:23.867782       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1109 14:41:23.869315       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:41:23.879512       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:41:25.087822       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:41:25.162985       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1109 14:41:25.334720       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1109 14:41:28.840410       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:41:29.990619       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:41:29.996503       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:41:30.079389       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [6ad007f71a96e590d50048df507c07b493e0222f34d7d8d8aab9042417038265] <==
	I1109 14:41:28.835929       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1109 14:41:28.835944       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1109 14:41:28.836217       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:41:28.835962       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1109 14:41:28.836939       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:41:28.835975       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1109 14:41:28.841813       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1109 14:41:28.841920       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1109 14:41:28.846600       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1109 14:41:28.849895       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1109 14:41:28.854970       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:41:28.866146       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:41:28.878140       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:41:28.878152       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:41:28.878937       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:41:28.879172       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:41:28.880458       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:41:28.880908       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:41:28.884821       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1109 14:41:28.884901       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1109 14:41:28.884934       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1109 14:41:28.884947       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1109 14:41:28.884954       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1109 14:41:28.893662       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-545474" podCIDRs=["10.244.0.0/24"]
	I1109 14:41:48.827547       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [815cf768902d1e6e0ca5cf6c4a9b3404ccc05aa2c9263645f273549e8be0878c] <==
	I1109 14:41:30.716800       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:41:30.823099       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:41:30.924483       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:41:30.924517       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:41:30.924591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:41:31.159168       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:41:31.159221       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:41:31.172131       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:41:31.172490       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:41:31.172506       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:41:31.176891       1 config.go:200] "Starting service config controller"
	I1109 14:41:31.176904       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:41:31.176925       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:41:31.176930       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:41:31.176957       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:41:31.176961       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:41:31.181587       1 config.go:309] "Starting node config controller"
	I1109 14:41:31.181607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:41:31.181615       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:41:31.279619       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:41:31.279653       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:41:31.279680       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fe5e5e2c794d8bf99757926017ae61649f9ba376308d801cb2376670a194a169] <==
	E1109 14:41:22.025298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1109 14:41:22.025380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:41:22.025428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1109 14:41:22.025486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:41:22.025548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1109 14:41:22.025804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1109 14:41:22.025885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:41:22.025930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1109 14:41:22.026813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1109 14:41:22.026900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:41:22.027003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:41:22.027081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:41:22.027118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1109 14:41:22.027177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1109 14:41:22.846127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1109 14:41:22.850244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1109 14:41:22.947964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1109 14:41:22.979164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1109 14:41:23.022297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1109 14:41:23.022442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1109 14:41:23.160800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1109 14:41:23.165917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1109 14:41:23.232332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1109 14:41:23.259387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1109 14:41:25.380278       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:41:26 no-preload-545474 kubelet[2000]: I1109 14:41:26.510916    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-545474" podStartSLOduration=1.510887656 podStartE2EDuration="1.510887656s" podCreationTimestamp="2025-11-09 14:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:41:26.282535215 +0000 UTC m=+1.285787096" watchObservedRunningTime="2025-11-09 14:41:26.510887656 +0000 UTC m=+1.514139537"
	Nov 09 14:41:26 no-preload-545474 kubelet[2000]: E1109 14:41:26.515402    2000 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-545474\" already exists" pod="kube-system/kube-apiserver-no-preload-545474"
	Nov 09 14:41:28 no-preload-545474 kubelet[2000]: I1109 14:41:28.921280    2000 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 09 14:41:28 no-preload-545474 kubelet[2000]: I1109 14:41:28.922109    2000 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.341307    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7be905a2-33f1-4116-b900-707561fa3d05-xtables-lock\") pod \"kindnet-t9j49\" (UID: \"7be905a2-33f1-4116-b900-707561fa3d05\") " pod="kube-system/kindnet-t9j49"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.341362    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5de7aa0e-eb03-4535-9040-8d34d0520820-kube-proxy\") pod \"kube-proxy-2mnwv\" (UID: \"5de7aa0e-eb03-4535-9040-8d34d0520820\") " pod="kube-system/kube-proxy-2mnwv"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.341383    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5de7aa0e-eb03-4535-9040-8d34d0520820-lib-modules\") pod \"kube-proxy-2mnwv\" (UID: \"5de7aa0e-eb03-4535-9040-8d34d0520820\") " pod="kube-system/kube-proxy-2mnwv"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.341403    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggskg\" (UniqueName: \"kubernetes.io/projected/5de7aa0e-eb03-4535-9040-8d34d0520820-kube-api-access-ggskg\") pod \"kube-proxy-2mnwv\" (UID: \"5de7aa0e-eb03-4535-9040-8d34d0520820\") " pod="kube-system/kube-proxy-2mnwv"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.342167    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7be905a2-33f1-4116-b900-707561fa3d05-cni-cfg\") pod \"kindnet-t9j49\" (UID: \"7be905a2-33f1-4116-b900-707561fa3d05\") " pod="kube-system/kindnet-t9j49"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.342189    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7be905a2-33f1-4116-b900-707561fa3d05-lib-modules\") pod \"kindnet-t9j49\" (UID: \"7be905a2-33f1-4116-b900-707561fa3d05\") " pod="kube-system/kindnet-t9j49"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.342210    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5de7aa0e-eb03-4535-9040-8d34d0520820-xtables-lock\") pod \"kube-proxy-2mnwv\" (UID: \"5de7aa0e-eb03-4535-9040-8d34d0520820\") " pod="kube-system/kube-proxy-2mnwv"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.342226    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mwf\" (UniqueName: \"kubernetes.io/projected/7be905a2-33f1-4116-b900-707561fa3d05-kube-api-access-g2mwf\") pod \"kindnet-t9j49\" (UID: \"7be905a2-33f1-4116-b900-707561fa3d05\") " pod="kube-system/kindnet-t9j49"
	Nov 09 14:41:30 no-preload-545474 kubelet[2000]: I1109 14:41:30.462765    2000 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 09 14:41:31 no-preload-545474 kubelet[2000]: I1109 14:41:31.523933    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2mnwv" podStartSLOduration=1.5239175459999998 podStartE2EDuration="1.523917546s" podCreationTimestamp="2025-11-09 14:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:41:31.523542379 +0000 UTC m=+6.526794350" watchObservedRunningTime="2025-11-09 14:41:31.523917546 +0000 UTC m=+6.527169427"
	Nov 09 14:41:34 no-preload-545474 kubelet[2000]: I1109 14:41:34.983936    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-t9j49" podStartSLOduration=1.195171336 podStartE2EDuration="4.983918413s" podCreationTimestamp="2025-11-09 14:41:30 +0000 UTC" firstStartedPulling="2025-11-09 14:41:30.596209259 +0000 UTC m=+5.599461124" lastFinishedPulling="2025-11-09 14:41:34.384956328 +0000 UTC m=+9.388208201" observedRunningTime="2025-11-09 14:41:34.548801804 +0000 UTC m=+9.552053685" watchObservedRunningTime="2025-11-09 14:41:34.983918413 +0000 UTC m=+9.987170302"
	Nov 09 14:41:45 no-preload-545474 kubelet[2000]: I1109 14:41:45.186663    2000 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 09 14:41:45 no-preload-545474 kubelet[2000]: I1109 14:41:45.355007    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54wvh\" (UniqueName: \"kubernetes.io/projected/5c1ae78c-82fb-4b73-a894-745d823e352c-kube-api-access-54wvh\") pod \"storage-provisioner\" (UID: \"5c1ae78c-82fb-4b73-a894-745d823e352c\") " pod="kube-system/storage-provisioner"
	Nov 09 14:41:45 no-preload-545474 kubelet[2000]: I1109 14:41:45.355075    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c1ae78c-82fb-4b73-a894-745d823e352c-tmp\") pod \"storage-provisioner\" (UID: \"5c1ae78c-82fb-4b73-a894-745d823e352c\") " pod="kube-system/storage-provisioner"
	Nov 09 14:41:45 no-preload-545474 kubelet[2000]: I1109 14:41:45.455692    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6074143-5b8d-41d4-8951-a551d8d2a4b9-config-volume\") pod \"coredns-66bc5c9577-gq42x\" (UID: \"e6074143-5b8d-41d4-8951-a551d8d2a4b9\") " pod="kube-system/coredns-66bc5c9577-gq42x"
	Nov 09 14:41:45 no-preload-545474 kubelet[2000]: I1109 14:41:45.455761    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq5dj\" (UniqueName: \"kubernetes.io/projected/e6074143-5b8d-41d4-8951-a551d8d2a4b9-kube-api-access-xq5dj\") pod \"coredns-66bc5c9577-gq42x\" (UID: \"e6074143-5b8d-41d4-8951-a551d8d2a4b9\") " pod="kube-system/coredns-66bc5c9577-gq42x"
	Nov 09 14:41:45 no-preload-545474 kubelet[2000]: W1109 14:41:45.924625    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/crio-235c2e0fcf319d6fdc41e4dad4ffe1e90161866c8e413ea063e98259250e4722 WatchSource:0}: Error finding container 235c2e0fcf319d6fdc41e4dad4ffe1e90161866c8e413ea063e98259250e4722: Status 404 returned error can't find the container with id 235c2e0fcf319d6fdc41e4dad4ffe1e90161866c8e413ea063e98259250e4722
	Nov 09 14:41:46 no-preload-545474 kubelet[2000]: I1109 14:41:46.613327    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gq42x" podStartSLOduration=17.613306835 podStartE2EDuration="17.613306835s" podCreationTimestamp="2025-11-09 14:41:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:41:46.580993988 +0000 UTC m=+21.584245877" watchObservedRunningTime="2025-11-09 14:41:46.613306835 +0000 UTC m=+21.616558716"
	Nov 09 14:41:46 no-preload-545474 kubelet[2000]: I1109 14:41:46.613516    2000 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.613509520000001 podStartE2EDuration="15.61350952s" podCreationTimestamp="2025-11-09 14:41:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-09 14:41:46.61294757 +0000 UTC m=+21.616199451" watchObservedRunningTime="2025-11-09 14:41:46.61350952 +0000 UTC m=+21.616761401"
	Nov 09 14:41:48 no-preload-545474 kubelet[2000]: I1109 14:41:48.679465    2000 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl7v6\" (UniqueName: \"kubernetes.io/projected/cf172c15-bc73-4b57-b8a2-5a67c4f6b615-kube-api-access-hl7v6\") pod \"busybox\" (UID: \"cf172c15-bc73-4b57-b8a2-5a67c4f6b615\") " pod="default/busybox"
	Nov 09 14:41:48 no-preload-545474 kubelet[2000]: W1109 14:41:48.948644    2000 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/crio-0f2d1a6beebc7dd6073d295315b25542f9c70277c047dec0788bccac1f56a483 WatchSource:0}: Error finding container 0f2d1a6beebc7dd6073d295315b25542f9c70277c047dec0788bccac1f56a483: Status 404 returned error can't find the container with id 0f2d1a6beebc7dd6073d295315b25542f9c70277c047dec0788bccac1f56a483
	
	
	==> storage-provisioner [e1dc67b6668b188720235a266cefcae89947a40372008730745cc11c130bbaa9] <==
	I1109 14:41:45.722769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:41:45.757324       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:41:45.757480       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:41:45.761484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:45.776805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:41:45.777085       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:41:45.778121       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-545474_65bd88b1-ea04-4eed-935a-47ee69019e6a!
	W1109 14:41:45.779353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:41:45.781025       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c28aa965-9a7b-46e6-8965-1a16b69399de", APIVersion:"v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-545474_65bd88b1-ea04-4eed-935a-47ee69019e6a became leader
	W1109 14:41:45.801232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:41:45.878749       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-545474_65bd88b1-ea04-4eed-935a-47ee69019e6a!
	W1109 14:41:47.805219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:47.809829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:49.813504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:49.820892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:51.824057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:51.828574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:53.832252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:53.837758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:55.841286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:55.846493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:57.850806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:41:57.860069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-545474 -n no-preload-545474
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-545474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-545474 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-545474 --alsologtostderr -v=1: exit status 80 (2.373929021s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-545474 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:43:17.561112  218396 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:43:17.561303  218396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:43:17.561331  218396 out.go:374] Setting ErrFile to fd 2...
	I1109 14:43:17.561350  218396 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:43:17.561751  218396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:43:17.562429  218396 out.go:368] Setting JSON to false
	I1109 14:43:17.562486  218396 mustload.go:66] Loading cluster: no-preload-545474
	I1109 14:43:17.562916  218396 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:43:17.563434  218396 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:43:17.581321  218396 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:43:17.581616  218396 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:43:17.637665  218396 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:43:17.627543407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:43:17.638539  218396 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-545474 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1109 14:43:17.642355  218396 out.go:179] * Pausing node no-preload-545474 ... 
	I1109 14:43:17.645504  218396 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:43:17.646010  218396 ssh_runner.go:195] Run: systemctl --version
	I1109 14:43:17.646093  218396 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:43:17.663804  218396 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:43:17.766581  218396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:43:17.779346  218396 pause.go:52] kubelet running: true
	I1109 14:43:17.779419  218396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:43:17.993075  218396 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:43:17.993198  218396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:43:18.067258  218396 cri.go:89] found id: "97146ccbb7051494060c47263c5598c7fdc03c86778dbc868aff7662435f9c33"
	I1109 14:43:18.067326  218396 cri.go:89] found id: "a56ad15fb1acd25af1ccdd95286c2550aeb592ff86d9a87affec2580a370d7dc"
	I1109 14:43:18.067346  218396 cri.go:89] found id: "3a56c743b8a3e4504b63b2de555d8f1d8433520edee172e996d5bb694372c514"
	I1109 14:43:18.067364  218396 cri.go:89] found id: "5528464a75a8a31cc909e0b5261d839f7dcb4a347d188366b316e9c264cb7e1e"
	I1109 14:43:18.067382  218396 cri.go:89] found id: "9844fa4dd0e741c1e135049b0ec50a2c5f6206bf090fce7e184f76f6f5de6cb7"
	I1109 14:43:18.067406  218396 cri.go:89] found id: "e0fa19fb74d19affdcb53dc2a19669b9497ed088c06c1be5d6368f4a1d768ad8"
	I1109 14:43:18.067424  218396 cri.go:89] found id: "9c6841e7685fc5801280c9ddf2d6c0a2a346830e53491f2f3d439c2e21c977fd"
	I1109 14:43:18.067441  218396 cri.go:89] found id: "9df79b1c8bb2b207310b5498d17036b5975ec9b07c6ca842407741f9ad73de97"
	I1109 14:43:18.067458  218396 cri.go:89] found id: "baa0cc7198ae04a8507839c6fddece0836011983b84bd4fd652613a18bd01d25"
	I1109 14:43:18.067488  218396 cri.go:89] found id: "33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247"
	I1109 14:43:18.067511  218396 cri.go:89] found id: "2f347db595849365a063711d3213a98014e01fa8ff9740f0c0cae1ee2989edca"
	I1109 14:43:18.067528  218396 cri.go:89] found id: ""
	I1109 14:43:18.067604  218396 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:43:18.087074  218396 retry.go:31] will retry after 282.535052ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:43:18Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:43:18.370668  218396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:43:18.384271  218396 pause.go:52] kubelet running: false
	I1109 14:43:18.384368  218396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:43:18.551555  218396 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:43:18.551648  218396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:43:18.630206  218396 cri.go:89] found id: "97146ccbb7051494060c47263c5598c7fdc03c86778dbc868aff7662435f9c33"
	I1109 14:43:18.630231  218396 cri.go:89] found id: "a56ad15fb1acd25af1ccdd95286c2550aeb592ff86d9a87affec2580a370d7dc"
	I1109 14:43:18.630237  218396 cri.go:89] found id: "3a56c743b8a3e4504b63b2de555d8f1d8433520edee172e996d5bb694372c514"
	I1109 14:43:18.630241  218396 cri.go:89] found id: "5528464a75a8a31cc909e0b5261d839f7dcb4a347d188366b316e9c264cb7e1e"
	I1109 14:43:18.630244  218396 cri.go:89] found id: "9844fa4dd0e741c1e135049b0ec50a2c5f6206bf090fce7e184f76f6f5de6cb7"
	I1109 14:43:18.630250  218396 cri.go:89] found id: "e0fa19fb74d19affdcb53dc2a19669b9497ed088c06c1be5d6368f4a1d768ad8"
	I1109 14:43:18.630253  218396 cri.go:89] found id: "9c6841e7685fc5801280c9ddf2d6c0a2a346830e53491f2f3d439c2e21c977fd"
	I1109 14:43:18.630257  218396 cri.go:89] found id: "9df79b1c8bb2b207310b5498d17036b5975ec9b07c6ca842407741f9ad73de97"
	I1109 14:43:18.630260  218396 cri.go:89] found id: "baa0cc7198ae04a8507839c6fddece0836011983b84bd4fd652613a18bd01d25"
	I1109 14:43:18.630267  218396 cri.go:89] found id: "33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247"
	I1109 14:43:18.630274  218396 cri.go:89] found id: "2f347db595849365a063711d3213a98014e01fa8ff9740f0c0cae1ee2989edca"
	I1109 14:43:18.630277  218396 cri.go:89] found id: ""
	I1109 14:43:18.630327  218396 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:43:18.644263  218396 retry.go:31] will retry after 217.407364ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:43:18Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:43:18.862768  218396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:43:18.875780  218396 pause.go:52] kubelet running: false
	I1109 14:43:18.875841  218396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:43:19.049022  218396 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:43:19.049116  218396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:43:19.120973  218396 cri.go:89] found id: "97146ccbb7051494060c47263c5598c7fdc03c86778dbc868aff7662435f9c33"
	I1109 14:43:19.120995  218396 cri.go:89] found id: "a56ad15fb1acd25af1ccdd95286c2550aeb592ff86d9a87affec2580a370d7dc"
	I1109 14:43:19.121013  218396 cri.go:89] found id: "3a56c743b8a3e4504b63b2de555d8f1d8433520edee172e996d5bb694372c514"
	I1109 14:43:19.121017  218396 cri.go:89] found id: "5528464a75a8a31cc909e0b5261d839f7dcb4a347d188366b316e9c264cb7e1e"
	I1109 14:43:19.121020  218396 cri.go:89] found id: "9844fa4dd0e741c1e135049b0ec50a2c5f6206bf090fce7e184f76f6f5de6cb7"
	I1109 14:43:19.121024  218396 cri.go:89] found id: "e0fa19fb74d19affdcb53dc2a19669b9497ed088c06c1be5d6368f4a1d768ad8"
	I1109 14:43:19.121028  218396 cri.go:89] found id: "9c6841e7685fc5801280c9ddf2d6c0a2a346830e53491f2f3d439c2e21c977fd"
	I1109 14:43:19.121032  218396 cri.go:89] found id: "9df79b1c8bb2b207310b5498d17036b5975ec9b07c6ca842407741f9ad73de97"
	I1109 14:43:19.121035  218396 cri.go:89] found id: "baa0cc7198ae04a8507839c6fddece0836011983b84bd4fd652613a18bd01d25"
	I1109 14:43:19.121047  218396 cri.go:89] found id: "33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247"
	I1109 14:43:19.121054  218396 cri.go:89] found id: "2f347db595849365a063711d3213a98014e01fa8ff9740f0c0cae1ee2989edca"
	I1109 14:43:19.121057  218396 cri.go:89] found id: ""
	I1109 14:43:19.121116  218396 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:43:19.132645  218396 retry.go:31] will retry after 456.699927ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:43:19Z" level=error msg="open /run/runc: no such file or directory"
	I1109 14:43:19.590405  218396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:43:19.603957  218396 pause.go:52] kubelet running: false
	I1109 14:43:19.604018  218396 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1109 14:43:19.784278  218396 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1109 14:43:19.784375  218396 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1109 14:43:19.854862  218396 cri.go:89] found id: "97146ccbb7051494060c47263c5598c7fdc03c86778dbc868aff7662435f9c33"
	I1109 14:43:19.854888  218396 cri.go:89] found id: "a56ad15fb1acd25af1ccdd95286c2550aeb592ff86d9a87affec2580a370d7dc"
	I1109 14:43:19.854893  218396 cri.go:89] found id: "3a56c743b8a3e4504b63b2de555d8f1d8433520edee172e996d5bb694372c514"
	I1109 14:43:19.854903  218396 cri.go:89] found id: "5528464a75a8a31cc909e0b5261d839f7dcb4a347d188366b316e9c264cb7e1e"
	I1109 14:43:19.854906  218396 cri.go:89] found id: "9844fa4dd0e741c1e135049b0ec50a2c5f6206bf090fce7e184f76f6f5de6cb7"
	I1109 14:43:19.854935  218396 cri.go:89] found id: "e0fa19fb74d19affdcb53dc2a19669b9497ed088c06c1be5d6368f4a1d768ad8"
	I1109 14:43:19.854960  218396 cri.go:89] found id: "9c6841e7685fc5801280c9ddf2d6c0a2a346830e53491f2f3d439c2e21c977fd"
	I1109 14:43:19.854964  218396 cri.go:89] found id: "9df79b1c8bb2b207310b5498d17036b5975ec9b07c6ca842407741f9ad73de97"
	I1109 14:43:19.854975  218396 cri.go:89] found id: "baa0cc7198ae04a8507839c6fddece0836011983b84bd4fd652613a18bd01d25"
	I1109 14:43:19.854981  218396 cri.go:89] found id: "33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247"
	I1109 14:43:19.854990  218396 cri.go:89] found id: "2f347db595849365a063711d3213a98014e01fa8ff9740f0c0cae1ee2989edca"
	I1109 14:43:19.854993  218396 cri.go:89] found id: ""
	I1109 14:43:19.855070  218396 ssh_runner.go:195] Run: sudo runc list -f json
	I1109 14:43:19.870292  218396 out.go:203] 
	W1109 14:43:19.873241  218396 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T14:43:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1109 14:43:19.873266  218396 out.go:285] * 
	W1109 14:43:19.878186  218396 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 14:43:19.881241  218396 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-545474 --alsologtostderr -v=1 failed: exit status 80
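Editor's note: the exit status 80 above reduces to one repeated error. Every attempt to enumerate containers with `sudo runc list -f json` on the node fails with "open /run/runc: no such file or directory", and once the retries are exhausted minikube surfaces it as GUEST_PAUSE. The retry-with-backoff pattern visible in the log (retries of roughly 283ms, 217ms and 457ms before the final failure) can be sketched as follows; this is an illustrative Go sketch of the behaviour the log shows, not minikube's actual pause.go/retry.go code, and only the command string is taken verbatim from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // listRunc mirrors the probe the pause log runs on the node:
    // `sudo runc list -f json`, returning combined stdout/stderr.
    func listRunc() ([]byte, error) {
        return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
    }

    func main() {
        // Illustrative backoff values, similar to the ~283ms/217ms/457ms retries in the log.
        backoffs := []time.Duration{283 * time.Millisecond, 217 * time.Millisecond, 457 * time.Millisecond}
        var out []byte
        var err error
        for attempt := 0; ; attempt++ {
            if out, err = listRunc(); err == nil {
                fmt.Printf("running containers: %s\n", out)
                return
            }
            if attempt >= len(backoffs) {
                break // retries exhausted; minikube reports this as GUEST_PAUSE
            }
            fmt.Printf("will retry after %v: %v\n%s", backoffs[attempt], err, out)
            time.Sleep(backoffs[attempt])
        }
        fmt.Printf("giving up: %v\n%s", err, out)
    }

On this crio node the runc state directory /run/runc does not exist at any of the attempts, so each retry fails identically until the command gives up.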
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-545474
helpers_test.go:243: (dbg) docker inspect no-preload-545474:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be",
	        "Created": "2025-11-09T14:40:31.3484438Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 215420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:42:12.931743928Z",
	            "FinishedAt": "2025-11-09T14:42:11.916224945Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/hostname",
	        "HostsPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/hosts",
	        "LogPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be-json.log",
	        "Name": "/no-preload-545474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-545474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-545474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be",
	                "LowerDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-545474",
	                "Source": "/var/lib/docker/volumes/no-preload-545474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-545474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-545474",
	                "name.minikube.sigs.k8s.io": "no-preload-545474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "01f62b5fbd0f00c724ebd5b0fecca50faf2b627ce5ee3b5c7575c8c88e55faaf",
	            "SandboxKey": "/var/run/docker/netns/01f62b5fbd0f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-545474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:86:11:4a:f1:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb0cf9a1901884390b78ed227402aaa4fd370ba585a10d7d075f56046116850c",
	                    "EndpointID": "8ef1cd96f1382ef6f8d96d4eb331233c0c0417b38417a9493e0a3dbc58c7e90e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-545474",
	                        "435b3ae5d443"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
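Editor's note: two details in the inspect output above tie back to the failed pause. The container State is still Running even though kubelet was disabled by the pause attempts, which is consistent with the status probe below exiting non-zero while still printing Running for the host; and the "22/tcp" binding to 127.0.0.1:33095 is the same endpoint the pause log dialled via sshutil. The host port can be recovered with the same inspect format template the log's cli_runner.go invocation uses; the Go wrapper below is only an illustration, with the template string copied from the log.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same format template the pause log runs to locate the node's SSH port.
        tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "no-preload-545474").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        // The template emits the port wrapped in the literal single quotes, e.g. '33095'.
        port := strings.Trim(strings.TrimSpace(string(out)), "'")
        fmt.Println("ssh endpoint: 127.0.0.1:" + port)
    }

For the inspect output above this yields 33095, matching the "new ssh client" line in the pause log.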
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-545474 -n no-preload-545474
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-545474 -n no-preload-545474: exit status 2 (353.28763ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-545474 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-545474 logs -n 25: (1.335867369s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-274584                                                                                                                                                                                                               │ disable-driver-mounts-274584 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ stop    │ -p newest-cni-192074 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-192074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ image   │ newest-cni-192074 image list --format=json                                                                                                                                                                                                    │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ pause   │ -p newest-cni-192074 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ delete  │ -p newest-cni-192074                                                                                                                                                                                                                          │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ delete  │ -p newest-cni-192074                                                                                                                                                                                                                          │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ start   │ -p auto-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-241021                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-545474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ stop    │ -p no-preload-545474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:42 UTC │
	│ addons  │ enable dashboard -p no-preload-545474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:42 UTC │ 09 Nov 25 14:42 UTC │
	│ start   │ -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:42 UTC │ 09 Nov 25 14:43 UTC │
	│ ssh     │ -p auto-241021 pgrep -a kubelet                                                                                                                                                                                                               │ auto-241021                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:43 UTC │ 09 Nov 25 14:43 UTC │
	│ image   │ no-preload-545474 image list --format=json                                                                                                                                                                                                    │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:43 UTC │ 09 Nov 25 14:43 UTC │
	│ pause   │ -p no-preload-545474 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:42:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:42:12.551409  215276 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:42:12.551517  215276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:42:12.551556  215276 out.go:374] Setting ErrFile to fd 2...
	I1109 14:42:12.551561  215276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:42:12.551804  215276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:42:12.552257  215276 out.go:368] Setting JSON to false
	I1109 14:42:12.553136  215276 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5083,"bootTime":1762694250,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:42:12.553196  215276 start.go:143] virtualization:  
	I1109 14:42:12.558440  215276 out.go:179] * [no-preload-545474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:42:12.561649  215276 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:42:12.561713  215276 notify.go:221] Checking for updates...
	I1109 14:42:12.568606  215276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:42:12.571729  215276 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:42:12.574692  215276 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:42:12.578196  215276 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:42:12.581074  215276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:42:12.584455  215276 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:42:12.585043  215276 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:42:12.624679  215276 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:42:12.624808  215276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:42:12.719804  215276 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:42:12.707294824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:42:12.719948  215276 docker.go:319] overlay module found
	I1109 14:42:12.723090  215276 out.go:179] * Using the docker driver based on existing profile
	I1109 14:42:07.820240  212661 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:42:08.121003  212661 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:42:08.121186  212661 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-241021 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:42:08.398156  212661 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:42:08.398411  212661 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-241021 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:42:09.376113  212661 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:42:09.652576  212661 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:42:10.138815  212661 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:42:10.139031  212661 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:42:10.445596  212661 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:42:10.851520  212661 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:42:12.062791  212661 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:42:12.539804  212661 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:42:12.793980  212661 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:42:12.794825  212661 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:42:12.797672  212661 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:42:12.725988  215276 start.go:309] selected driver: docker
	I1109 14:42:12.726007  215276 start.go:930] validating driver "docker" against &{Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:42:12.726106  215276 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:42:12.726805  215276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:42:12.826431  215276 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:42:12.816425828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:42:12.826777  215276 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:42:12.826811  215276 cni.go:84] Creating CNI manager for ""
	I1109 14:42:12.826867  215276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:42:12.826911  215276 start.go:353] cluster config:
	{Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:42:12.830720  215276 out.go:179] * Starting "no-preload-545474" primary control-plane node in "no-preload-545474" cluster
	I1109 14:42:12.833807  215276 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:42:12.836944  215276 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:42:12.839937  215276 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:42:12.840161  215276 cache.go:107] acquiring lock: {Name:mk8ebf1821303e62d035eff80c869bb7ee741166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840257  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1109 14:42:12.840273  215276 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.329µs
	I1109 14:42:12.840282  215276 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1109 14:42:12.840299  215276 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:42:12.840470  215276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/config.json ...
	I1109 14:42:12.840709  215276 cache.go:107] acquiring lock: {Name:mk53871c92845ee135c49257023f708114b8f41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840764  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1109 14:42:12.840771  215276 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 68.932µs
	I1109 14:42:12.840777  215276 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1109 14:42:12.840790  215276 cache.go:107] acquiring lock: {Name:mk4f58a09b1fc4909821101e1b77c9ffca6005ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840818  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1109 14:42:12.840824  215276 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.594µs
	I1109 14:42:12.840830  215276 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1109 14:42:12.840839  215276 cache.go:107] acquiring lock: {Name:mk73ab4d10a27d479f537d5f1b1270fea0724531 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840866  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1109 14:42:12.840870  215276 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.697µs
	I1109 14:42:12.840876  215276 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1109 14:42:12.840888  215276 cache.go:107] acquiring lock: {Name:mk27f7c7c6f60f594b852d08be5e102aa55cc901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840913  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1109 14:42:12.840918  215276 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 34.036µs
	I1109 14:42:12.840924  215276 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1109 14:42:12.840934  215276 cache.go:107] acquiring lock: {Name:mkcfd288d144643fe17076d14fdf648fc664b270 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840959  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1109 14:42:12.840965  215276 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.615µs
	I1109 14:42:12.840970  215276 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1109 14:42:12.840985  215276 cache.go:107] acquiring lock: {Name:mk769de8354c929e88f0f6b138307492bb4ec194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.841012  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1109 14:42:12.841017  215276 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.434µs
	I1109 14:42:12.841022  215276 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1109 14:42:12.841031  215276 cache.go:107] acquiring lock: {Name:mk6a5718ed24b8768b1b0c11e268924a881d21f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.841055  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1109 14:42:12.841060  215276 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 29.719µs
	I1109 14:42:12.841066  215276 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1109 14:42:12.841072  215276 cache.go:87] Successfully saved all images to host disk.
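The cache hits above all follow the same check: take the per-image lock, stat the expected tarball under .minikube/cache/images, and skip the save when the file already exists. A minimal sketch of that existence check, using illustrative paths and names rather than minikube's actual code:

-- sketch (Go, illustrative) --
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedTarPath maps an image ref like "registry.k8s.io/pause:3.10.1" to a
// tarball path under the cache root, mirroring the layout seen in the log
// (the ":" before the tag becomes "_" in the filename).
func cachedTarPath(cacheRoot, image string) string {
	return filepath.Join(cacheRoot, strings.ReplaceAll(image, ":", "_"))
}

// needsSave reports whether the image still has to be saved to disk;
// a cache hit is simply "the tar file is already there".
func needsSave(cacheRoot, image string) bool {
	_, err := os.Stat(cachedTarPath(cacheRoot, image))
	return os.IsNotExist(err)
}

func main() {
	// Illustrative cache root only.
	root := "/home/jenkins/.minikube/cache/images/arm64"
	for _, img := range []string{
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/etcd:3.6.4-0",
	} {
		if needsSave(root, img) {
			fmt.Println("would save", img)
		} else {
			fmt.Println("cache hit for", img)
		}
	}
}
-- /sketch --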
	I1109 14:42:12.860234  215276 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:42:12.860263  215276 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:42:12.860280  215276 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:42:12.860303  215276 start.go:360] acquireMachinesLock for no-preload-545474: {Name:mkc3edd7cced849c77bded9e0b243a9510986130 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.860367  215276 start.go:364] duration metric: took 44.038µs to acquireMachinesLock for "no-preload-545474"
	I1109 14:42:12.860389  215276 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:42:12.860399  215276 fix.go:54] fixHost starting: 
	I1109 14:42:12.860650  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:12.885738  215276 fix.go:112] recreateIfNeeded on no-preload-545474: state=Stopped err=<nil>
	W1109 14:42:12.885764  215276 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:42:12.889229  215276 out.go:252] * Restarting existing docker container for "no-preload-545474" ...
	I1109 14:42:12.889316  215276 cli_runner.go:164] Run: docker start no-preload-545474
	I1109 14:42:13.211013  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:13.244264  215276 kic.go:430] container "no-preload-545474" state is running.
	I1109 14:42:13.244638  215276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:42:13.268233  215276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/config.json ...
	I1109 14:42:13.268462  215276 machine.go:94] provisionDockerMachine start ...
	I1109 14:42:13.268527  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:13.292946  215276 main.go:143] libmachine: Using SSH client type: native
	I1109 14:42:13.293263  215276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:42:13.293271  215276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:42:13.294021  215276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58756->127.0.0.1:33095: read: connection reset by peer
	I1109 14:42:16.487270  215276 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-545474
	
	I1109 14:42:16.487296  215276 ubuntu.go:182] provisioning hostname "no-preload-545474"
	I1109 14:42:16.487365  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:16.517852  215276 main.go:143] libmachine: Using SSH client type: native
	I1109 14:42:16.518163  215276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:42:16.518180  215276 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-545474 && echo "no-preload-545474" | sudo tee /etc/hostname
	I1109 14:42:16.702558  215276 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-545474
	
	I1109 14:42:16.702647  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:16.732085  215276 main.go:143] libmachine: Using SSH client type: native
	I1109 14:42:16.732400  215276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:42:16.732424  215276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545474/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:42:16.924598  215276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:42:16.924628  215276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:42:16.924653  215276 ubuntu.go:190] setting up certificates
	I1109 14:42:16.924672  215276 provision.go:84] configureAuth start
	I1109 14:42:16.924738  215276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:42:16.953445  215276 provision.go:143] copyHostCerts
	I1109 14:42:16.953508  215276 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:42:16.953531  215276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:42:16.953612  215276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:42:16.953715  215276 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:42:16.953726  215276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:42:16.953755  215276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:42:16.953814  215276 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:42:16.953823  215276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:42:16.953852  215276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:42:16.953904  215276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.no-preload-545474 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-545474]
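The san=[...] list in the line above becomes the server certificate's subject alternative names, with IP addresses and hostnames kept in separate certificate fields. A self-contained sketch of issuing a cert with those SANs (self-signed here for brevity; the real flow signs with the ca.pem/ca-key.pem pair named above):

-- sketch (Go, illustrative) --
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the log line above: IPs and DNS names go into
	// separate fields of the certificate template.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")}
	dns := []string{"localhost", "minikube", "no-preload-545474"}

	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-545474"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	// Self-signed for illustration (template is its own parent).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
-- /sketch --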
	I1109 14:42:12.801148  212661 out.go:252]   - Booting up control plane ...
	I1109 14:42:12.801266  212661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:42:12.801359  212661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:42:12.801437  212661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:42:12.821957  212661 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:42:12.822076  212661 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:42:12.830868  212661 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:42:12.831356  212661 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:42:12.831419  212661 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:42:13.030573  212661 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:42:13.030727  212661 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:42:14.033165  212661 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00241333s
	I1109 14:42:14.036899  212661 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:42:14.037142  212661 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1109 14:42:14.037500  212661 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:42:14.037754  212661 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:42:17.635915  212661 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.597684288s
	I1109 14:42:17.560490  215276 provision.go:177] copyRemoteCerts
	I1109 14:42:17.560600  215276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:42:17.560674  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:17.577883  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:17.685069  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:42:17.705359  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:42:17.725647  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:42:17.749475  215276 provision.go:87] duration metric: took 824.7788ms to configureAuth
	I1109 14:42:17.749553  215276 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:42:17.749787  215276 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:42:17.749943  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:17.773217  215276 main.go:143] libmachine: Using SSH client type: native
	I1109 14:42:17.773511  215276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:42:17.773526  215276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:42:18.120042  215276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:42:18.120115  215276 machine.go:97] duration metric: took 4.851641314s to provisionDockerMachine
	I1109 14:42:18.120140  215276 start.go:293] postStartSetup for "no-preload-545474" (driver="docker")
	I1109 14:42:18.120161  215276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:42:18.120256  215276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:42:18.120324  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:18.156279  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:18.264275  215276 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:42:18.268319  215276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:42:18.268352  215276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:42:18.268363  215276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:42:18.268417  215276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:42:18.268501  215276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:42:18.268615  215276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:42:18.286310  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:42:18.317262  215276 start.go:296] duration metric: took 197.09633ms for postStartSetup
	I1109 14:42:18.317372  215276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:42:18.317422  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:18.362097  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:18.481751  215276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:42:18.487238  215276 fix.go:56] duration metric: took 5.626831565s for fixHost
	I1109 14:42:18.487266  215276 start.go:83] releasing machines lock for "no-preload-545474", held for 5.626887688s
	I1109 14:42:18.487339  215276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:42:18.511681  215276 ssh_runner.go:195] Run: cat /version.json
	I1109 14:42:18.511718  215276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:42:18.511734  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:18.511795  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:18.549445  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:18.557972  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:18.672374  215276 ssh_runner.go:195] Run: systemctl --version
	I1109 14:42:18.798542  215276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:42:18.881491  215276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:42:18.893646  215276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:42:18.893786  215276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:42:18.925321  215276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:42:18.925396  215276 start.go:496] detecting cgroup driver to use...
	I1109 14:42:18.925444  215276 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:42:18.925518  215276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:42:18.951876  215276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:42:18.972349  215276 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:42:18.972461  215276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:42:18.992899  215276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:42:19.013392  215276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:42:19.213618  215276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:42:19.388237  215276 docker.go:234] disabling docker service ...
	I1109 14:42:19.388352  215276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:42:19.408498  215276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:42:19.423115  215276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:42:19.571162  215276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:42:19.721938  215276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:42:19.741701  215276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:42:19.762822  215276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:42:19.762887  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.776152  215276 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:42:19.776311  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.788591  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.801495  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.816673  215276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:42:19.827660  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.838412  215276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.848533  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.857863  215276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:42:19.868733  215276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:42:19.877552  215276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:42:20.024228  215276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:42:20.201243  215276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:42:20.201378  215276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:42:20.205883  215276 start.go:564] Will wait 60s for crictl version
	I1109 14:42:20.205993  215276 ssh_runner.go:195] Run: which crictl
	I1109 14:42:20.210498  215276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:42:20.240086  215276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:42:20.240181  215276 ssh_runner.go:195] Run: crio --version
	I1109 14:42:20.290982  215276 ssh_runner.go:195] Run: crio --version
	I1109 14:42:20.335805  215276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:42:20.338852  215276 cli_runner.go:164] Run: docker network inspect no-preload-545474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:42:20.357534  215276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:42:20.362646  215276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:42:20.374607  215276 kubeadm.go:884] updating cluster {Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:42:20.374722  215276 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:42:20.374761  215276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:42:20.419650  215276 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:42:20.419682  215276 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:42:20.419690  215276 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1109 14:42:20.419780  215276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-545474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1109 14:42:20.419859  215276 ssh_runner.go:195] Run: crio config
	I1109 14:42:20.497229  215276 cni.go:84] Creating CNI manager for ""
	I1109 14:42:20.497297  215276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:42:20.497327  215276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:42:20.497378  215276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545474 NodeName:no-preload-545474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:42:20.497540  215276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:42:20.497631  215276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:42:20.506931  215276 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:42:20.507038  215276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:42:20.515841  215276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:42:20.528978  215276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:42:20.542049  215276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1109 14:42:20.557347  215276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:42:20.561807  215276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
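This /etc/hosts rewrite (and the host.minikube.internal one earlier) is idempotent: drop any existing line for the name, append the fresh mapping, and copy the file back into place. A rough Go equivalent of that shell one-liner, writing to a scratch path instead of the real /etc/hosts:

-- sketch (Go, illustrative) --
package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry rewrites hostsPath so that exactly one line maps name to ip,
// mirroring the "grep -v ...; echo ..." pattern in the log.
func setHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this name (tab-separated, end of line).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative: operate on a scratch copy rather than the real /etc/hosts.
	if err := setHostsEntry("/tmp/hosts.copy", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /sketch --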
	I1109 14:42:20.573492  215276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:42:20.747972  215276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:42:20.765976  215276 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474 for IP: 192.168.85.2
	I1109 14:42:20.765999  215276 certs.go:195] generating shared ca certs ...
	I1109 14:42:20.766014  215276 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:20.766149  215276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:42:20.766196  215276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:42:20.766218  215276 certs.go:257] generating profile certs ...
	I1109 14:42:20.766310  215276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.key
	I1109 14:42:20.766377  215276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key.33b59cf6
	I1109 14:42:20.766417  215276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.key
	I1109 14:42:20.766533  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:42:20.766567  215276 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:42:20.766580  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:42:20.766605  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:42:20.766630  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:42:20.766657  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:42:20.766702  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:42:20.767287  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:42:20.786616  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:42:20.804428  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:42:20.823532  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:42:20.843824  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:42:20.868769  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:42:20.887438  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:42:20.912007  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:42:20.935063  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:42:20.957674  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:42:20.981982  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:42:21.009849  215276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:42:21.026556  215276 ssh_runner.go:195] Run: openssl version
	I1109 14:42:21.033813  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:42:21.045375  215276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:42:21.049771  215276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:42:21.049832  215276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:42:21.096750  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:42:21.104843  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:42:21.113575  215276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:42:21.117243  215276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:42:21.117354  215276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:42:21.159203  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:42:21.167394  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:42:21.175822  215276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:42:21.179316  215276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:42:21.179375  215276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:42:21.220603  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
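The openssl/ln pairs above install the extra CAs where TLS lookups expect them: OpenSSL resolves a trusted CA through a <subject-hash>.0 symlink (b5213941.0 for minikubeCA.pem, 51391683.0 for 4116.pem), so each cert's hash is computed with `openssl x509 -hash -noout` and then linked. A small sketch of those two steps driven from Go, assuming openssl is on PATH and using an illustrative target directory:

-- sketch (Go, illustrative) --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the OpenSSL subject hash for certPath and creates the
// <hash>.0 symlink in certsDir that the TLS stack uses for CA lookup.
func trustCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, mirroring the "test -L || ln -fs" guard in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative paths; the log links from /usr/share/ca-certificates into /etc/ssl/certs.
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs-demo"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /sketch --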
	I1109 14:42:21.228357  215276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:42:21.232091  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:42:21.274060  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:42:21.316825  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:42:21.357858  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:42:21.399153  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:42:21.447143  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1109 14:42:21.494822  215276 kubeadm.go:401] StartCluster: {Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:42:21.494953  215276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:42:21.495074  215276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:42:21.541572  215276 cri.go:89] found id: ""
	I1109 14:42:21.541696  215276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:42:21.554753  215276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:42:21.554826  215276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:42:21.554910  215276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:42:21.576922  215276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:42:21.577345  215276 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-545474" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:42:21.577451  215276 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-545474" cluster setting kubeconfig missing "no-preload-545474" context setting]
	I1109 14:42:21.577772  215276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:21.579112  215276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:42:21.611145  215276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1109 14:42:21.611179  215276 kubeadm.go:602] duration metric: took 56.334651ms to restartPrimaryControlPlane
	I1109 14:42:21.611189  215276 kubeadm.go:403] duration metric: took 116.393333ms to StartCluster
	I1109 14:42:21.611203  215276 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:21.611263  215276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:42:21.611851  215276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:21.612079  215276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:42:21.612437  215276 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:42:21.612481  215276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:42:21.612610  215276 addons.go:70] Setting storage-provisioner=true in profile "no-preload-545474"
	I1109 14:42:21.612631  215276 addons.go:239] Setting addon storage-provisioner=true in "no-preload-545474"
	W1109 14:42:21.612637  215276 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:42:21.612661  215276 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:42:21.613113  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:21.613492  215276 addons.go:70] Setting dashboard=true in profile "no-preload-545474"
	I1109 14:42:21.613514  215276 addons.go:239] Setting addon dashboard=true in "no-preload-545474"
	W1109 14:42:21.613521  215276 addons.go:248] addon dashboard should already be in state true
	I1109 14:42:21.613541  215276 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:42:21.613898  215276 addons.go:70] Setting default-storageclass=true in profile "no-preload-545474"
	I1109 14:42:21.613915  215276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545474"
	I1109 14:42:21.614128  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:21.614261  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:21.621883  215276 out.go:179] * Verifying Kubernetes components...
	I1109 14:42:21.626138  215276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:42:21.666846  215276 addons.go:239] Setting addon default-storageclass=true in "no-preload-545474"
	W1109 14:42:21.666869  215276 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:42:21.666894  215276 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:42:21.667308  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:21.687720  215276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:42:21.690682  215276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:42:21.694891  215276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:42:18.919593  212661 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.879059826s
	I1109 14:42:21.039480  212661 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001919207s
	I1109 14:42:21.062533  212661 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:42:21.581982  212661 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:42:21.622056  212661 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:42:21.622273  212661 kubeadm.go:319] [mark-control-plane] Marking the node auto-241021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:42:21.701773  212661 kubeadm.go:319] [bootstrap-token] Using token: wm7c8z.kpjb1ns37gy8zfml
	I1109 14:42:21.695013  215276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:42:21.695024  215276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:42:21.695084  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:21.700018  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:42:21.700043  215276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:42:21.700109  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:21.729453  215276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:42:21.729475  215276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:42:21.729538  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:21.770707  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:21.777714  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:21.788103  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:22.176209  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:42:22.176275  215276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:42:22.187070  215276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:42:22.205030  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:42:22.205055  215276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:42:22.232005  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:42:22.232030  215276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:42:22.274675  215276 node_ready.go:35] waiting up to 6m0s for node "no-preload-545474" to be "Ready" ...
	I1109 14:42:22.293443  215276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:42:22.320499  215276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:42:22.325216  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:42:22.325275  215276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:42:22.399855  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:42:22.399935  215276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:42:22.507225  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:42:22.507289  215276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:42:21.705543  212661 out.go:252]   - Configuring RBAC rules ...
	I1109 14:42:21.705681  212661 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:42:21.750565  212661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:42:21.782718  212661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:42:21.795444  212661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:42:21.805340  212661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:42:21.813627  212661 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:42:21.873338  212661 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:42:22.447343  212661 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:42:22.787426  212661 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:42:22.787445  212661 kubeadm.go:319] 
	I1109 14:42:22.787513  212661 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:42:22.787518  212661 kubeadm.go:319] 
	I1109 14:42:22.787599  212661 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:42:22.787604  212661 kubeadm.go:319] 
	I1109 14:42:22.787630  212661 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:42:22.787691  212661 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:42:22.787744  212661 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:42:22.787749  212661 kubeadm.go:319] 
	I1109 14:42:22.787806  212661 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:42:22.787811  212661 kubeadm.go:319] 
	I1109 14:42:22.787860  212661 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:42:22.787884  212661 kubeadm.go:319] 
	I1109 14:42:22.787940  212661 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:42:22.788020  212661 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:42:22.788099  212661 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:42:22.788104  212661 kubeadm.go:319] 
	I1109 14:42:22.788192  212661 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:42:22.788283  212661 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:42:22.788288  212661 kubeadm.go:319] 
	I1109 14:42:22.788375  212661 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wm7c8z.kpjb1ns37gy8zfml \
	I1109 14:42:22.788484  212661 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 14:42:22.788506  212661 kubeadm.go:319] 	--control-plane 
	I1109 14:42:22.788510  212661 kubeadm.go:319] 
	I1109 14:42:22.788599  212661 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:42:22.788604  212661 kubeadm.go:319] 
	I1109 14:42:22.788689  212661 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wm7c8z.kpjb1ns37gy8zfml \
	I1109 14:42:22.788796  212661 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 14:42:22.795578  212661 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:42:22.795823  212661 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 14:42:22.795976  212661 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
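The kubeadm output above documents the standard post-init steps: copy /etc/kubernetes/admin.conf into ~/.kube/config for a regular user (or export KUBECONFIG as root), apply a pod network, and optionally join more nodes with the printed token and CA hash. An illustrative spot check with that kubeconfig, not part of the test run, using the kubectl binary minikube ships on the node, would be:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
    # the lone control-plane node stays NotReady until a CNI (kindnet, chosen below) is applied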
	I1109 14:42:22.795995  212661 cni.go:84] Creating CNI manager for ""
	I1109 14:42:22.796002  212661 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:42:22.799358  212661 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:42:22.681932  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:42:22.682003  215276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:42:22.748000  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:42:22.748062  215276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:42:22.805250  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:42:22.805309  215276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:42:22.846491  215276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:42:22.802583  212661 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:42:22.813746  212661 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:42:22.813765  212661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:42:22.872433  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 14:42:23.593094  212661 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:42:23.593214  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:23.593334  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-241021 minikube.k8s.io/updated_at=2025_11_09T14_42_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=auto-241021 minikube.k8s.io/primary=true
	I1109 14:42:24.037032  212661 ops.go:34] apiserver oom_adj: -16
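The three Run lines above and the ops.go result show the post-init wiring: minikube reads the apiserver's OOM adjustment from /proc (-16 on the legacy -17..+15 scale, i.e. strongly deprioritized for the kernel OOM killer), binds the kube-system default service account to cluster-admin, and stamps the node with minikube.k8s.io labels. A manual re-check of the first value, assuming a shell on the node, is simply:

    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj
    -16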
	I1109 14:42:24.037165  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:24.538132  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:25.037616  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:25.537528  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:26.038030  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:26.537899  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:27.037641  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:27.537502  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:27.749792  212661 kubeadm.go:1114] duration metric: took 4.15662262s to wait for elevateKubeSystemPrivileges
	I1109 14:42:27.749817  212661 kubeadm.go:403] duration metric: took 21.108743552s to StartCluster
	I1109 14:42:27.749840  212661 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:27.749898  212661 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:42:27.750803  212661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:27.751010  212661 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:42:27.751122  212661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:42:27.751386  212661 config.go:182] Loaded profile config "auto-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:42:27.751423  212661 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:42:27.751485  212661 addons.go:70] Setting storage-provisioner=true in profile "auto-241021"
	I1109 14:42:27.751501  212661 addons.go:239] Setting addon storage-provisioner=true in "auto-241021"
	I1109 14:42:27.751521  212661 host.go:66] Checking if "auto-241021" exists ...
	I1109 14:42:27.752037  212661 cli_runner.go:164] Run: docker container inspect auto-241021 --format={{.State.Status}}
	I1109 14:42:27.752579  212661 addons.go:70] Setting default-storageclass=true in profile "auto-241021"
	I1109 14:42:27.752605  212661 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-241021"
	I1109 14:42:27.752928  212661 cli_runner.go:164] Run: docker container inspect auto-241021 --format={{.State.Status}}
	I1109 14:42:27.754902  212661 out.go:179] * Verifying Kubernetes components...
	I1109 14:42:27.758529  212661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:42:27.806008  212661 addons.go:239] Setting addon default-storageclass=true in "auto-241021"
	I1109 14:42:27.806046  212661 host.go:66] Checking if "auto-241021" exists ...
	I1109 14:42:27.806445  212661 cli_runner.go:164] Run: docker container inspect auto-241021 --format={{.State.Status}}
	I1109 14:42:27.809999  212661 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:42:27.813017  212661 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:42:27.813038  212661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:42:27.813107  212661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-241021
	I1109 14:42:27.836325  212661 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:42:27.836344  212661 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:42:27.836406  212661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-241021
	I1109 14:42:27.862250  212661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/auto-241021/id_rsa Username:docker}
	I1109 14:42:27.894999  212661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/auto-241021/id_rsa Username:docker}
	I1109 14:42:28.420586  212661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:42:28.425168  212661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:42:28.707807  212661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:42:28.708342  212661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:42:29.571950  212661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.151281155s)
	I1109 14:42:30.400352  212661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.975101528s)
	I1109 14:42:30.400444  212661 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.692538737s)
	I1109 14:42:30.400498  212661 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.692090101s)
	I1109 14:42:30.401045  212661 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
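The sed pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block before the "forward . /etc/resolv.conf" stanza and a "log" directive before "errors", then pipes the result to kubectl replace. Assuming an otherwise stock Corefile, the patched fragment reads roughly:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

which is what lets pods on this profile resolve host.minikube.internal to 192.168.76.1, the host side of the cluster network.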
	I1109 14:42:30.403017  212661 node_ready.go:35] waiting up to 15m0s for node "auto-241021" to be "Ready" ...
	I1109 14:42:30.404434  212661 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1109 14:42:28.143094  215276 node_ready.go:49] node "no-preload-545474" is "Ready"
	I1109 14:42:28.143120  215276 node_ready.go:38] duration metric: took 5.868412391s for node "no-preload-545474" to be "Ready" ...
	I1109 14:42:28.143133  215276 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:42:28.143187  215276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:42:28.552462  215276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.258983922s)
	I1109 14:42:31.168018  215276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.847483496s)
	I1109 14:42:31.168193  215276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.321604371s)
	I1109 14:42:31.168446  215276 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.0252263s)
	I1109 14:42:31.168476  215276 api_server.go:72] duration metric: took 9.556363424s to wait for apiserver process to appear ...
	I1109 14:42:31.168488  215276 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:42:31.168510  215276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:42:31.171614  215276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-545474 addons enable metrics-server
	
	I1109 14:42:31.174868  215276 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1109 14:42:31.177928  215276 addons.go:515] duration metric: took 9.565418839s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1109 14:42:31.189933  215276 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1109 14:42:31.191608  215276 api_server.go:141] control plane version: v1.34.1
	I1109 14:42:31.191657  215276 api_server.go:131] duration metric: took 23.15778ms to wait for apiserver health ...
	I1109 14:42:31.191668  215276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:42:31.198643  215276 system_pods.go:59] 8 kube-system pods found
	I1109 14:42:31.198696  215276 system_pods.go:61] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:42:31.198719  215276 system_pods.go:61] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:42:31.198732  215276 system_pods.go:61] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:42:31.198741  215276 system_pods.go:61] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:42:31.198757  215276 system_pods.go:61] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:42:31.198776  215276 system_pods.go:61] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:42:31.198789  215276 system_pods.go:61] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:42:31.198797  215276 system_pods.go:61] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Running
	I1109 14:42:31.198805  215276 system_pods.go:74] duration metric: took 7.124426ms to wait for pod list to return data ...
	I1109 14:42:31.198818  215276 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:42:31.208306  215276 default_sa.go:45] found service account: "default"
	I1109 14:42:31.208343  215276 default_sa.go:55] duration metric: took 9.518772ms for default service account to be created ...
	I1109 14:42:31.208363  215276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:42:31.213366  215276 system_pods.go:86] 8 kube-system pods found
	I1109 14:42:31.213429  215276 system_pods.go:89] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:42:31.213464  215276 system_pods.go:89] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:42:31.213483  215276 system_pods.go:89] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:42:31.213497  215276 system_pods.go:89] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:42:31.213503  215276 system_pods.go:89] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:42:31.213515  215276 system_pods.go:89] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:42:31.213534  215276 system_pods.go:89] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:42:31.213543  215276 system_pods.go:89] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Running
	I1109 14:42:31.213551  215276 system_pods.go:126] duration metric: took 5.180867ms to wait for k8s-apps to be running ...
	I1109 14:42:31.213567  215276 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:42:31.213632  215276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:42:31.233192  215276 system_svc.go:56] duration metric: took 19.616864ms WaitForService to wait for kubelet
	I1109 14:42:31.233268  215276 kubeadm.go:587] duration metric: took 9.621152502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:42:31.233301  215276 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:42:31.238898  215276 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:42:31.238928  215276 node_conditions.go:123] node cpu capacity is 2
	I1109 14:42:31.238950  215276 node_conditions.go:105] duration metric: took 5.632055ms to run NodePressure ...
	I1109 14:42:31.238962  215276 start.go:242] waiting for startup goroutines ...
	I1109 14:42:31.238969  215276 start.go:247] waiting for cluster config update ...
	I1109 14:42:31.238987  215276 start.go:256] writing updated cluster config ...
	I1109 14:42:31.239257  215276 ssh_runner.go:195] Run: rm -f paused
	I1109 14:42:31.243557  215276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:42:31.247762  215276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gq42x" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:42:30.407251  212661 addons.go:515] duration metric: took 2.65581727s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1109 14:42:30.907146  212661 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-241021" context rescaled to 1 replicas
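The kapi.go line above scales the coredns deployment down to a single replica for this one-node profile; a roughly equivalent manual command (illustrative, not from the test) would be:

    kubectl -n kube-system scale deployment coredns --replicas=1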
	W1109 14:42:32.406046  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:33.254541  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:35.754001  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:34.906282  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:37.405894  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:37.755859  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:39.756274  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:42.254091  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:39.406554  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:41.406797  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:44.256943  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:46.754486  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:43.905973  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:46.405976  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:48.754943  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:51.253884  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:48.406619  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:50.906354  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:53.757079  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:56.252756  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:52.907028  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:55.406168  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:58.252872  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:43:00.276669  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:57.906674  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:43:00.410976  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:43:02.755199  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	I1109 14:43:04.253085  215276 pod_ready.go:94] pod "coredns-66bc5c9577-gq42x" is "Ready"
	I1109 14:43:04.253113  215276 pod_ready.go:86] duration metric: took 33.005281391s for pod "coredns-66bc5c9577-gq42x" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.255993  215276 pod_ready.go:83] waiting for pod "etcd-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.260738  215276 pod_ready.go:94] pod "etcd-no-preload-545474" is "Ready"
	I1109 14:43:04.260762  215276 pod_ready.go:86] duration metric: took 4.741134ms for pod "etcd-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.263194  215276 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.268142  215276 pod_ready.go:94] pod "kube-apiserver-no-preload-545474" is "Ready"
	I1109 14:43:04.268169  215276 pod_ready.go:86] duration metric: took 4.951475ms for pod "kube-apiserver-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.270468  215276 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.450900  215276 pod_ready.go:94] pod "kube-controller-manager-no-preload-545474" is "Ready"
	I1109 14:43:04.450931  215276 pod_ready.go:86] duration metric: took 180.436157ms for pod "kube-controller-manager-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.651906  215276 pod_ready.go:83] waiting for pod "kube-proxy-2mnwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:05.050824  215276 pod_ready.go:94] pod "kube-proxy-2mnwv" is "Ready"
	I1109 14:43:05.050853  215276 pod_ready.go:86] duration metric: took 398.918608ms for pod "kube-proxy-2mnwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:05.250907  215276 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:05.650794  215276 pod_ready.go:94] pod "kube-scheduler-no-preload-545474" is "Ready"
	I1109 14:43:05.650824  215276 pod_ready.go:86] duration metric: took 399.891248ms for pod "kube-scheduler-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:05.650836  215276 pod_ready.go:40] duration metric: took 34.407245084s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:43:05.708028  215276 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:43:05.711282  215276 out.go:179] * Done! kubectl is now configured to use "no-preload-545474" cluster and "default" namespace by default
	W1109 14:43:02.906266  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:43:04.906398  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:43:07.406888  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	I1109 14:43:09.906973  212661 node_ready.go:49] node "auto-241021" is "Ready"
	I1109 14:43:09.907003  212661 node_ready.go:38] duration metric: took 39.503928875s for node "auto-241021" to be "Ready" ...
	I1109 14:43:09.907016  212661 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:43:09.907077  212661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:43:09.921621  212661 api_server.go:72] duration metric: took 42.17058545s to wait for apiserver process to appear ...
	I1109 14:43:09.921645  212661 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:43:09.921668  212661 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:43:09.930314  212661 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
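The healthz probe logged just above is a plain HTTPS GET against the apiserver endpoint; /healthz (like /livez and /readyz) is normally readable without credentials under the default RBAC, so an equivalent spot check from the host would be approximately:

    curl -sk https://192.168.76.2:8443/healthz
    ok

(-k is needed because the serving certificate is signed by the cluster's own CA rather than a public one.)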
	I1109 14:43:09.931453  212661 api_server.go:141] control plane version: v1.34.1
	I1109 14:43:09.931481  212661 api_server.go:131] duration metric: took 9.829424ms to wait for apiserver health ...
	I1109 14:43:09.931491  212661 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:43:09.934858  212661 system_pods.go:59] 8 kube-system pods found
	I1109 14:43:09.934900  212661 system_pods.go:61] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:09.934907  212661 system_pods.go:61] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:09.934913  212661 system_pods.go:61] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:09.934917  212661 system_pods.go:61] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:09.934921  212661 system_pods.go:61] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:09.934926  212661 system_pods.go:61] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:09.934930  212661 system_pods.go:61] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:09.934937  212661 system_pods.go:61] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:09.934951  212661 system_pods.go:74] duration metric: took 3.454197ms to wait for pod list to return data ...
	I1109 14:43:09.934963  212661 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:43:09.937937  212661 default_sa.go:45] found service account: "default"
	I1109 14:43:09.937959  212661 default_sa.go:55] duration metric: took 2.989372ms for default service account to be created ...
	I1109 14:43:09.937968  212661 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:43:09.941116  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:09.941150  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:09.941162  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:09.941181  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:09.941188  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:09.941193  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:09.941197  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:09.941201  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:09.941208  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:09.941235  212661 retry.go:31] will retry after 262.178205ms: missing components: kube-dns
	I1109 14:43:10.214339  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:10.214368  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:10.214375  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:10.214381  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:10.214390  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:10.214394  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:10.214398  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:10.214402  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:10.214407  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:10.214421  212661 retry.go:31] will retry after 247.959356ms: missing components: kube-dns
	I1109 14:43:10.467883  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:10.467960  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:10.467973  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:10.467980  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:10.467985  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:10.467990  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:10.467994  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:10.467998  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:10.468005  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:10.468024  212661 retry.go:31] will retry after 451.140887ms: missing components: kube-dns
	I1109 14:43:10.923675  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:10.923720  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:10.923750  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:10.923766  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:10.923771  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:10.923776  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:10.923785  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:10.923789  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:10.923799  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:10.923823  212661 retry.go:31] will retry after 579.89858ms: missing components: kube-dns
	I1109 14:43:11.509695  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:11.509728  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Running
	I1109 14:43:11.509740  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:11.509745  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:11.509750  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:11.509755  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:11.509760  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:11.509768  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:11.509772  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Running
	I1109 14:43:11.509784  212661 system_pods.go:126] duration metric: took 1.571805713s to wait for k8s-apps to be running ...
	I1109 14:43:11.509794  212661 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:43:11.509855  212661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:43:11.533181  212661 system_svc.go:56] duration metric: took 23.37666ms WaitForService to wait for kubelet
	I1109 14:43:11.533261  212661 kubeadm.go:587] duration metric: took 43.782230569s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:43:11.533289  212661 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:43:11.536453  212661 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:43:11.536486  212661 node_conditions.go:123] node cpu capacity is 2
	I1109 14:43:11.536500  212661 node_conditions.go:105] duration metric: took 3.204905ms to run NodePressure ...
	I1109 14:43:11.536539  212661 start.go:242] waiting for startup goroutines ...
	I1109 14:43:11.536554  212661 start.go:247] waiting for cluster config update ...
	I1109 14:43:11.536567  212661 start.go:256] writing updated cluster config ...
	I1109 14:43:11.536873  212661 ssh_runner.go:195] Run: rm -f paused
	I1109 14:43:11.541357  212661 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:43:11.545066  212661 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-54bms" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.549961  212661 pod_ready.go:94] pod "coredns-66bc5c9577-54bms" is "Ready"
	I1109 14:43:11.549988  212661 pod_ready.go:86] duration metric: took 4.890297ms for pod "coredns-66bc5c9577-54bms" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.552829  212661 pod_ready.go:83] waiting for pod "etcd-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.558135  212661 pod_ready.go:94] pod "etcd-auto-241021" is "Ready"
	I1109 14:43:11.558228  212661 pod_ready.go:86] duration metric: took 5.31929ms for pod "etcd-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.561108  212661 pod_ready.go:83] waiting for pod "kube-apiserver-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.565873  212661 pod_ready.go:94] pod "kube-apiserver-auto-241021" is "Ready"
	I1109 14:43:11.565897  212661 pod_ready.go:86] duration metric: took 4.763108ms for pod "kube-apiserver-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.568432  212661 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.945928  212661 pod_ready.go:94] pod "kube-controller-manager-auto-241021" is "Ready"
	I1109 14:43:11.945961  212661 pod_ready.go:86] duration metric: took 377.503838ms for pod "kube-controller-manager-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:12.146933  212661 pod_ready.go:83] waiting for pod "kube-proxy-vp98l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:12.545096  212661 pod_ready.go:94] pod "kube-proxy-vp98l" is "Ready"
	I1109 14:43:12.545121  212661 pod_ready.go:86] duration metric: took 398.11811ms for pod "kube-proxy-vp98l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:12.746586  212661 pod_ready.go:83] waiting for pod "kube-scheduler-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:13.145844  212661 pod_ready.go:94] pod "kube-scheduler-auto-241021" is "Ready"
	I1109 14:43:13.145872  212661 pod_ready.go:86] duration metric: took 399.258142ms for pod "kube-scheduler-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:13.145885  212661 pod_ready.go:40] duration metric: took 1.604453405s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:43:13.214630  212661 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:43:13.217763  212661 out.go:179] * Done! kubectl is now configured to use "auto-241021" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.396313727Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.399807399Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.399842238Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.399911351Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.402999316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.403034754Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.403058566Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.406317248Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.406367267Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.406391957Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.410102476Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.41013752Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:43:14 no-preload-545474 crio[651]: time="2025-11-09T14:43:14.991521577Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52163c2f-11cf-4e18-b63c-06002e4698c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:43:14 no-preload-545474 crio[651]: time="2025-11-09T14:43:14.993953585Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2015529a-0ba9-4867-916a-0f45f19d2422 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:43:14 no-preload-545474 crio[651]: time="2025-11-09T14:43:14.995454802Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq/dashboard-metrics-scraper" id=92c8afc1-a953-4956-837e-a3e074eb9532 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:43:14 no-preload-545474 crio[651]: time="2025-11-09T14:43:14.995603464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.016032641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.040704699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.080934049Z" level=info msg="Created container 33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq/dashboard-metrics-scraper" id=92c8afc1-a953-4956-837e-a3e074eb9532 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.084254411Z" level=info msg="Starting container: 33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247" id=4bf5bba8-6381-46c0-8469-13b254b39737 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.090898022Z" level=info msg="Started container" PID=1716 containerID=33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq/dashboard-metrics-scraper id=4bf5bba8-6381-46c0-8469-13b254b39737 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02bf9cccffb2e7f921681057abe235f77d9c425b36f48aaea700389f33baa558
	Nov 09 14:43:15 no-preload-545474 conmon[1714]: conmon 33687307c11ac0258a0c <ninfo>: container 1716 exited with status 1
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.344861432Z" level=info msg="Removing container: 54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9" id=af81a6c7-1449-4d80-9216-93a0a5fdcb8a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.357392667Z" level=info msg="Error loading conmon cgroup of container 54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9: cgroup deleted" id=af81a6c7-1449-4d80-9216-93a0a5fdcb8a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.361966358Z" level=info msg="Removed container 54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq/dashboard-metrics-scraper" id=af81a6c7-1449-4d80-9216-93a0a5fdcb8a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	33687307c11ac       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago       Exited              dashboard-metrics-scraper   3                   02bf9cccffb2e       dashboard-metrics-scraper-6ffb444bf9-blrjq   kubernetes-dashboard
	97146ccbb7051       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           19 seconds ago      Running             storage-provisioner         2                   9f68346c5cb0a       storage-provisioner                          kube-system
	2f347db595849       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago      Running             kubernetes-dashboard        0                   634a038efb97f       kubernetes-dashboard-855c9754f9-zlh4p        kubernetes-dashboard
	dff62811213e4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   e612db17c7925       busybox                                      default
	a56ad15fb1acd       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   9f68346c5cb0a       storage-provisioner                          kube-system
	3a56c743b8a3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   cfa06e172519b       coredns-66bc5c9577-gq42x                     kube-system
	5528464a75a8a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   316a639ecc69f       kube-proxy-2mnwv                             kube-system
	9844fa4dd0e74       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   facbfc2f8c056       kindnet-t9j49                                kube-system
	e0fa19fb74d19       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   466a74dbf4924       kube-controller-manager-no-preload-545474    kube-system
	9c6841e7685fc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   87e88904b7475       kube-apiserver-no-preload-545474             kube-system
	9df79b1c8bb2b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   fa52aa961b93e       etcd-no-preload-545474                       kube-system
	baa0cc7198ae0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   3be72e38361e5       kube-scheduler-no-preload-545474             kube-system
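In the table above, dashboard-metrics-scraper is already on ATTEMPT 3 and Exited, matching the conmon "container 1716 exited with status 1" line in the CRI-O log: that container is crash-looping while everything else runs. A first diagnostic step, assuming kubectl access to this profile, would be:

    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-blrjq --previous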
	
	
	==> coredns [3a56c743b8a3e4504b63b2de555d8f1d8433520edee172e996d5bb694372c514] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47556 - 38614 "HINFO IN 6565056289328285906.5439230190109313127. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004844166s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
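The dial errors above all target 10.96.0.1:443, the ClusterIP of the default kubernetes Service; immediately after the node restarts, CoreDNS cannot reach the apiserver through that VIP until kube-proxy and the CNI have reprogrammed the node, so it starts with an unsynced API (the WARNING higher up) and recovers once its watches connect. To confirm the VIP, one could run (illustrative):

    kubectl get svc kubernetes -n default
    # NAME         TYPE        CLUSTER-IP   PORT(S)
    # kubernetes   ClusterIP   10.96.0.1    443/TCP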
	
	
	==> describe nodes <==
	Name:               no-preload-545474
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-545474
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=no-preload-545474
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_41_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:41:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-545474
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:43:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:43:09 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:43:09 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:43:09 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:43:09 +0000   Sun, 09 Nov 2025 14:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-545474
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c8e11a83-d01e-4114-9a5f-a54126ee8120
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-gq42x                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-545474                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-t9j49                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-545474              250m (12%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-545474     200m (10%)    0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-2mnwv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-545474              100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-blrjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zlh4p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 50s                  kube-proxy       
	  Warning  CgroupV1                 2m7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node no-preload-545474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node no-preload-545474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node no-preload-545474 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-545474 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-545474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-545474 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           113s                 node-controller  Node no-preload-545474 event: Registered Node no-preload-545474 in Controller
	  Normal   NodeReady                96s                  kubelet          Node no-preload-545474 status is now: NodeReady
	  Normal   Starting                 61s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 61s)    kubelet          Node no-preload-545474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 61s)    kubelet          Node no-preload-545474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 61s)    kubelet          Node no-preload-545474 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                  node-controller  Node no-preload-545474 event: Registered Node no-preload-545474 in Controller
	
	
	==> dmesg <==
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:40] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:41] overlayfs: idmapped layers are currently not supported
	[ +35.139553] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:42] overlayfs: idmapped layers are currently not supported
	[  +6.994514] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9df79b1c8bb2b207310b5498d17036b5975ec9b07c6ca842407741f9ad73de97] <==
	{"level":"warn","ts":"2025-11-09T14:42:25.889246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:25.945123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:25.982789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.019657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.039930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.084878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.146607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.196449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.326424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.339539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.358397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.381470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.400801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.421112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.432434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.462049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.483436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.496547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.532652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.568183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.637397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.656485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.685791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.716598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.861774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59192","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:43:21 up  1:25,  0 user,  load average: 3.50, 4.07, 3.21
	Linux no-preload-545474 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9844fa4dd0e741c1e135049b0ec50a2c5f6206bf090fce7e184f76f6f5de6cb7] <==
	I1109 14:42:30.172055       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:42:30.172327       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:42:30.172453       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:42:30.172465       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:42:30.172476       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:42:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:42:30.390150       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:42:30.390179       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:42:30.390187       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:42:30.390868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:43:00.391846       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:43:00.391846       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:43:00.392066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:43:00.392170       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1109 14:43:01.790746       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:43:01.790777       1 metrics.go:72] Registering metrics
	I1109 14:43:01.792181       1 controller.go:711] "Syncing nftables rules"
	I1109 14:43:10.389870       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:43:10.389966       1 main.go:301] handling current node
	I1109 14:43:20.390485       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:43:20.390537       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9c6841e7685fc5801280c9ddf2d6c0a2a346830e53491f2f3d439c2e21c977fd] <==
	I1109 14:42:28.306583       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:42:28.315086       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:42:28.315204       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:42:28.318286       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:42:28.328457       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:42:28.328714       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:42:28.331601       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:42:28.355113       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:42:28.355247       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:42:28.355257       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:42:28.356123       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:42:28.389818       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1109 14:42:28.418117       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:42:28.582374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:42:29.021249       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:42:30.762286       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:42:30.820813       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:42:30.856491       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:42:30.877109       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:42:30.947965       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.68.119"}
	I1109 14:42:30.965582       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.142.193"}
	I1109 14:42:32.871140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:42:33.269821       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:42:33.320701       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:42:33.320753       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e0fa19fb74d19affdcb53dc2a19669b9497ed088c06c1be5d6368f4a1d768ad8] <==
	I1109 14:42:32.765416       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:42:32.765495       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:42:32.765556       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:42:32.765600       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:42:32.768010       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:42:32.768086       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:42:32.771225       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 14:42:32.772387       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:42:32.773639       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:42:32.775365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:42:32.775609       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:42:32.777696       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:42:32.778407       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:42:32.781503       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:42:32.781530       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:42:32.787767       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:42:32.787955       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1109 14:42:32.788032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1109 14:42:32.788174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:42:32.810883       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:42:32.813378       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:42:32.814531       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:42:32.833025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:42:32.833052       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:42:32.833082       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5528464a75a8a31cc909e0b5261d839f7dcb4a347d188366b316e9c264cb7e1e] <==
	I1109 14:42:30.754359       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:42:30.853624       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:42:30.953761       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:42:30.953794       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:42:30.953857       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:42:30.987517       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:42:30.987584       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:42:31.005200       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:42:31.005539       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:42:31.005554       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:42:31.022262       1 config.go:200] "Starting service config controller"
	I1109 14:42:31.022301       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:42:31.022357       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:42:31.022364       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:42:31.022383       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:42:31.022387       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:42:31.028577       1 config.go:309] "Starting node config controller"
	I1109 14:42:31.028597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:42:31.028605       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:42:31.122926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:42:31.122964       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:42:31.123025       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [baa0cc7198ae04a8507839c6fddece0836011983b84bd4fd652613a18bd01d25] <==
	I1109 14:42:25.172747       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:42:28.236200       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:42:28.236235       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:42:28.236253       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:42:28.236261       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:42:28.449950       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:42:28.449987       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:42:28.479113       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:42:28.479249       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:42:28.479270       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:42:28.479286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:42:28.581732       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:42:33 no-preload-545474 kubelet[771]: I1109 14:42:33.438586     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1fe25f87-54de-446f-b6f2-08786b029184-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zlh4p\" (UID: \"1fe25f87-54de-446f-b6f2-08786b029184\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlh4p"
	Nov 09 14:42:33 no-preload-545474 kubelet[771]: I1109 14:42:33.438662     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qds\" (UniqueName: \"kubernetes.io/projected/1fe25f87-54de-446f-b6f2-08786b029184-kube-api-access-67qds\") pod \"kubernetes-dashboard-855c9754f9-zlh4p\" (UID: \"1fe25f87-54de-446f-b6f2-08786b029184\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlh4p"
	Nov 09 14:42:33 no-preload-545474 kubelet[771]: W1109 14:42:33.745994     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/crio-634a038efb97fe395a77fb89c85792c1a54c8dcad0155f2322867fd066854d12 WatchSource:0}: Error finding container 634a038efb97fe395a77fb89c85792c1a54c8dcad0155f2322867fd066854d12: Status 404 returned error can't find the container with id 634a038efb97fe395a77fb89c85792c1a54c8dcad0155f2322867fd066854d12
	Nov 09 14:42:34 no-preload-545474 kubelet[771]: I1109 14:42:34.017344     771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 09 14:42:39 no-preload-545474 kubelet[771]: I1109 14:42:39.533813     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlh4p" podStartSLOduration=1.718788525 podStartE2EDuration="6.530137596s" podCreationTimestamp="2025-11-09 14:42:33 +0000 UTC" firstStartedPulling="2025-11-09 14:42:33.749916086 +0000 UTC m=+12.983644201" lastFinishedPulling="2025-11-09 14:42:38.561265157 +0000 UTC m=+17.794993272" observedRunningTime="2025-11-09 14:42:39.236841381 +0000 UTC m=+18.470569488" watchObservedRunningTime="2025-11-09 14:42:39.530137596 +0000 UTC m=+18.763865703"
	Nov 09 14:42:43 no-preload-545474 kubelet[771]: I1109 14:42:43.227993     771 scope.go:117] "RemoveContainer" containerID="ca37865d13ab04e5ea2d15e831dfad521e8629843fb0a5bf702301d7d79b252b"
	Nov 09 14:42:44 no-preload-545474 kubelet[771]: I1109 14:42:44.232400     771 scope.go:117] "RemoveContainer" containerID="ca37865d13ab04e5ea2d15e831dfad521e8629843fb0a5bf702301d7d79b252b"
	Nov 09 14:42:44 no-preload-545474 kubelet[771]: I1109 14:42:44.232664     771 scope.go:117] "RemoveContainer" containerID="4759455c1e720b78c772ed068536c091978cc22c15401f50eccf30ce1c75fcd5"
	Nov 09 14:42:44 no-preload-545474 kubelet[771]: E1109 14:42:44.232804     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:42:45 no-preload-545474 kubelet[771]: I1109 14:42:45.238963     771 scope.go:117] "RemoveContainer" containerID="4759455c1e720b78c772ed068536c091978cc22c15401f50eccf30ce1c75fcd5"
	Nov 09 14:42:45 no-preload-545474 kubelet[771]: E1109 14:42:45.239460     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:42:53 no-preload-545474 kubelet[771]: I1109 14:42:53.718326     771 scope.go:117] "RemoveContainer" containerID="4759455c1e720b78c772ed068536c091978cc22c15401f50eccf30ce1c75fcd5"
	Nov 09 14:42:54 no-preload-545474 kubelet[771]: I1109 14:42:54.262429     771 scope.go:117] "RemoveContainer" containerID="4759455c1e720b78c772ed068536c091978cc22c15401f50eccf30ce1c75fcd5"
	Nov 09 14:42:54 no-preload-545474 kubelet[771]: I1109 14:42:54.262709     771 scope.go:117] "RemoveContainer" containerID="54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9"
	Nov 09 14:42:54 no-preload-545474 kubelet[771]: E1109 14:42:54.262854     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:43:01 no-preload-545474 kubelet[771]: I1109 14:43:01.291204     771 scope.go:117] "RemoveContainer" containerID="a56ad15fb1acd25af1ccdd95286c2550aeb592ff86d9a87affec2580a370d7dc"
	Nov 09 14:43:03 no-preload-545474 kubelet[771]: I1109 14:43:03.718343     771 scope.go:117] "RemoveContainer" containerID="54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9"
	Nov 09 14:43:03 no-preload-545474 kubelet[771]: E1109 14:43:03.718970     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:43:14 no-preload-545474 kubelet[771]: I1109 14:43:14.989791     771 scope.go:117] "RemoveContainer" containerID="54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9"
	Nov 09 14:43:15 no-preload-545474 kubelet[771]: I1109 14:43:15.327539     771 scope.go:117] "RemoveContainer" containerID="54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9"
	Nov 09 14:43:15 no-preload-545474 kubelet[771]: I1109 14:43:15.327827     771 scope.go:117] "RemoveContainer" containerID="33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247"
	Nov 09 14:43:15 no-preload-545474 kubelet[771]: E1109 14:43:15.329287     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:43:17 no-preload-545474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:43:17 no-preload-545474 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:43:17 no-preload-545474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [2f347db595849365a063711d3213a98014e01fa8ff9740f0c0cae1ee2989edca] <==
	2025/11/09 14:42:38 Using namespace: kubernetes-dashboard
	2025/11/09 14:42:38 Using in-cluster config to connect to apiserver
	2025/11/09 14:42:38 Using secret token for csrf signing
	2025/11/09 14:42:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:42:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:42:38 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:42:38 Generating JWE encryption key
	2025/11/09 14:42:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:42:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:42:39 Initializing JWE encryption key from synchronized object
	2025/11/09 14:42:39 Creating in-cluster Sidecar client
	2025/11/09 14:42:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:42:39 Serving insecurely on HTTP port: 9090
	2025/11/09 14:43:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:42:38 Starting overwatch
	
	
	==> storage-provisioner [97146ccbb7051494060c47263c5598c7fdc03c86778dbc868aff7662435f9c33] <==
	I1109 14:43:01.343582       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:43:01.357393       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:43:01.357538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:43:01.360232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:04.815311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:09.075334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:12.673661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:15.726836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:18.750078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:18.755506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:43:18.755705       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:43:18.756460       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c28aa965-9a7b-46e6-8965-1a16b69399de", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-545474_e2317694-a436-48d8-8baf-a0459015c3a4 became leader
	I1109 14:43:18.756677       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-545474_e2317694-a436-48d8-8baf-a0459015c3a4!
	W1109 14:43:18.762916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:18.769731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:43:18.857017       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-545474_e2317694-a436-48d8-8baf-a0459015c3a4!
	W1109 14:43:20.780390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:20.787830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a56ad15fb1acd25af1ccdd95286c2550aeb592ff86d9a87affec2580a370d7dc] <==
	I1109 14:42:30.578115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:43:00.585460       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-545474 -n no-preload-545474
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-545474 -n no-preload-545474: exit status 2 (353.373606ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-545474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-545474
helpers_test.go:243: (dbg) docker inspect no-preload-545474:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be",
	        "Created": "2025-11-09T14:40:31.3484438Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 215420,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-09T14:42:12.931743928Z",
	            "FinishedAt": "2025-11-09T14:42:11.916224945Z"
	        },
	        "Image": "sha256:4e1b168be0d8ee6affdaa8dcd0274d605b2f417b6cfaa574410f0380fd962b97",
	        "ResolvConfPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/hostname",
	        "HostsPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/hosts",
	        "LogPath": "/var/lib/docker/containers/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be-json.log",
	        "Name": "/no-preload-545474",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-545474:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-545474",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be",
	                "LowerDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a-init/diff:/var/lib/docker/overlay2/b3a4de36ab2a7c9237cb1555a4866064baca53bca407ae5f84336bea9c6bc6c8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a85564fb62a3e51e03f4702b5e2ea71dec8ed82e123ce306678040ba01f5478a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-545474",
	                "Source": "/var/lib/docker/volumes/no-preload-545474/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-545474",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-545474",
	                "name.minikube.sigs.k8s.io": "no-preload-545474",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "01f62b5fbd0f00c724ebd5b0fecca50faf2b627ce5ee3b5c7575c8c88e55faaf",
	            "SandboxKey": "/var/run/docker/netns/01f62b5fbd0f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-545474": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:86:11:4a:f1:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb0cf9a1901884390b78ed227402aaa4fd370ba585a10d7d075f56046116850c",
	                    "EndpointID": "8ef1cd96f1382ef6f8d96d4eb331233c0c0417b38417a9493e0a3dbc58c7e90e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-545474",
	                        "435b3ae5d443"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-545474 -n no-preload-545474
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-545474 -n no-preload-545474: exit status 2 (372.041972ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-545474 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-545474 logs -n 25: (1.300415148s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ embed-certs-422728 image list --format=json                                                                                                                                                                                                   │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ pause   │ -p embed-certs-422728 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p default-k8s-diff-port-103048                                                                                                                                                                                                               │ default-k8s-diff-port-103048 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ delete  │ -p disable-driver-mounts-274584                                                                                                                                                                                                               │ disable-driver-mounts-274584 │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ delete  │ -p embed-certs-422728                                                                                                                                                                                                                         │ embed-certs-422728           │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:40 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:40 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable metrics-server -p newest-cni-192074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ stop    │ -p newest-cni-192074 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-192074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ start   │ -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ image   │ newest-cni-192074 image list --format=json                                                                                                                                                                                                    │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ pause   │ -p newest-cni-192074 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ delete  │ -p newest-cni-192074                                                                                                                                                                                                                          │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ delete  │ -p newest-cni-192074                                                                                                                                                                                                                          │ newest-cni-192074            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:41 UTC │
	│ start   │ -p auto-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-241021                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-545474 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │                     │
	│ stop    │ -p no-preload-545474 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:41 UTC │ 09 Nov 25 14:42 UTC │
	│ addons  │ enable dashboard -p no-preload-545474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:42 UTC │ 09 Nov 25 14:42 UTC │
	│ start   │ -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:42 UTC │ 09 Nov 25 14:43 UTC │
	│ ssh     │ -p auto-241021 pgrep -a kubelet                                                                                                                                                                                                               │ auto-241021                  │ jenkins │ v1.37.0 │ 09 Nov 25 14:43 UTC │ 09 Nov 25 14:43 UTC │
	│ image   │ no-preload-545474 image list --format=json                                                                                                                                                                                                    │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:43 UTC │ 09 Nov 25 14:43 UTC │
	│ pause   │ -p no-preload-545474 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-545474            │ jenkins │ v1.37.0 │ 09 Nov 25 14:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
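	The command table above and the "==> Last Start <==" section below are both parts of the log bundle collected for the profile under test. If reproducing locally, roughly the same output can be pulled straight from the profile (a sketch, assuming the same binary and profile name used in this run):
	
		out/minikube-linux-arm64 -p no-preload-545474 logs
	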
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 14:42:12
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 14:42:12.551409  215276 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:42:12.551517  215276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:42:12.551556  215276 out.go:374] Setting ErrFile to fd 2...
	I1109 14:42:12.551561  215276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:42:12.551804  215276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:42:12.552257  215276 out.go:368] Setting JSON to false
	I1109 14:42:12.553136  215276 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5083,"bootTime":1762694250,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:42:12.553196  215276 start.go:143] virtualization:  
	I1109 14:42:12.558440  215276 out.go:179] * [no-preload-545474] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:42:12.561649  215276 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:42:12.561713  215276 notify.go:221] Checking for updates...
	I1109 14:42:12.568606  215276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:42:12.571729  215276 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:42:12.574692  215276 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:42:12.578196  215276 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:42:12.581074  215276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:42:12.584455  215276 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:42:12.585043  215276 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:42:12.624679  215276 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:42:12.624808  215276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:42:12.719804  215276 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:42:12.707294824 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:42:12.719948  215276 docker.go:319] overlay module found
	I1109 14:42:12.723090  215276 out.go:179] * Using the docker driver based on existing profile
	I1109 14:42:07.820240  212661 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1109 14:42:08.121003  212661 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1109 14:42:08.121186  212661 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-241021 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:42:08.398156  212661 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1109 14:42:08.398411  212661 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-241021 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1109 14:42:09.376113  212661 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 14:42:09.652576  212661 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 14:42:10.138815  212661 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1109 14:42:10.139031  212661 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 14:42:10.445596  212661 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 14:42:10.851520  212661 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1109 14:42:12.062791  212661 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 14:42:12.539804  212661 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 14:42:12.793980  212661 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 14:42:12.794825  212661 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 14:42:12.797672  212661 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 14:42:12.725988  215276 start.go:309] selected driver: docker
	I1109 14:42:12.726007  215276 start.go:930] validating driver "docker" against &{Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:42:12.726106  215276 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:42:12.726805  215276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:42:12.826431  215276 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:42:12.816425828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:42:12.826777  215276 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:42:12.826811  215276 cni.go:84] Creating CNI manager for ""
	I1109 14:42:12.826867  215276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:42:12.826911  215276 start.go:353] cluster config:
	{Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:42:12.830720  215276 out.go:179] * Starting "no-preload-545474" primary control-plane node in "no-preload-545474" cluster
	I1109 14:42:12.833807  215276 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 14:42:12.836944  215276 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1109 14:42:12.839937  215276 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:42:12.840161  215276 cache.go:107] acquiring lock: {Name:mk8ebf1821303e62d035eff80c869bb7ee741166 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840257  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1109 14:42:12.840273  215276 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 120.329µs
	I1109 14:42:12.840282  215276 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1109 14:42:12.840299  215276 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 14:42:12.840470  215276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/config.json ...
	I1109 14:42:12.840709  215276 cache.go:107] acquiring lock: {Name:mk53871c92845ee135c49257023f708114b8f41e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840764  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1109 14:42:12.840771  215276 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 68.932µs
	I1109 14:42:12.840777  215276 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1109 14:42:12.840790  215276 cache.go:107] acquiring lock: {Name:mk4f58a09b1fc4909821101e1b77c9ffca6005ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840818  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1109 14:42:12.840824  215276 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 35.594µs
	I1109 14:42:12.840830  215276 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1109 14:42:12.840839  215276 cache.go:107] acquiring lock: {Name:mk73ab4d10a27d479f537d5f1b1270fea0724531 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840866  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1109 14:42:12.840870  215276 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 32.697µs
	I1109 14:42:12.840876  215276 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1109 14:42:12.840888  215276 cache.go:107] acquiring lock: {Name:mk27f7c7c6f60f594b852d08be5e102aa55cc901 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840913  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1109 14:42:12.840918  215276 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 34.036µs
	I1109 14:42:12.840924  215276 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1109 14:42:12.840934  215276 cache.go:107] acquiring lock: {Name:mkcfd288d144643fe17076d14fdf648fc664b270 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.840959  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1109 14:42:12.840965  215276 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 31.615µs
	I1109 14:42:12.840970  215276 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1109 14:42:12.840985  215276 cache.go:107] acquiring lock: {Name:mk769de8354c929e88f0f6b138307492bb4ec194 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.841012  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1109 14:42:12.841017  215276 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.434µs
	I1109 14:42:12.841022  215276 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1109 14:42:12.841031  215276 cache.go:107] acquiring lock: {Name:mk6a5718ed24b8768b1b0c11e268924a881d21f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.841055  215276 cache.go:115] /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1109 14:42:12.841060  215276 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 29.719µs
	I1109 14:42:12.841066  215276 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1109 14:42:12.841072  215276 cache.go:87] Successfully saved all images to host disk.
	I1109 14:42:12.860234  215276 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1109 14:42:12.860263  215276 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1109 14:42:12.860280  215276 cache.go:243] Successfully downloaded all kic artifacts
	I1109 14:42:12.860303  215276 start.go:360] acquireMachinesLock for no-preload-545474: {Name:mkc3edd7cced849c77bded9e0b243a9510986130 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 14:42:12.860367  215276 start.go:364] duration metric: took 44.038µs to acquireMachinesLock for "no-preload-545474"
	I1109 14:42:12.860389  215276 start.go:96] Skipping create...Using existing machine configuration
	I1109 14:42:12.860399  215276 fix.go:54] fixHost starting: 
	I1109 14:42:12.860650  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:12.885738  215276 fix.go:112] recreateIfNeeded on no-preload-545474: state=Stopped err=<nil>
	W1109 14:42:12.885764  215276 fix.go:138] unexpected machine state, will restart: <nil>
	I1109 14:42:12.889229  215276 out.go:252] * Restarting existing docker container for "no-preload-545474" ...
	I1109 14:42:12.889316  215276 cli_runner.go:164] Run: docker start no-preload-545474
	I1109 14:42:13.211013  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:13.244264  215276 kic.go:430] container "no-preload-545474" state is running.
	I1109 14:42:13.244638  215276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:42:13.268233  215276 profile.go:143] Saving config to /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/config.json ...
	I1109 14:42:13.268462  215276 machine.go:94] provisionDockerMachine start ...
	I1109 14:42:13.268527  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:13.292946  215276 main.go:143] libmachine: Using SSH client type: native
	I1109 14:42:13.293263  215276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:42:13.293271  215276 main.go:143] libmachine: About to run SSH command:
	hostname
	I1109 14:42:13.294021  215276 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58756->127.0.0.1:33095: read: connection reset by peer
	I1109 14:42:16.487270  215276 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-545474
	
	I1109 14:42:16.487296  215276 ubuntu.go:182] provisioning hostname "no-preload-545474"
	I1109 14:42:16.487365  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:16.517852  215276 main.go:143] libmachine: Using SSH client type: native
	I1109 14:42:16.518163  215276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:42:16.518180  215276 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-545474 && echo "no-preload-545474" | sudo tee /etc/hostname
	I1109 14:42:16.702558  215276 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-545474
	
	I1109 14:42:16.702647  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:16.732085  215276 main.go:143] libmachine: Using SSH client type: native
	I1109 14:42:16.732400  215276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:42:16.732424  215276 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-545474' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-545474/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-545474' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 14:42:16.924598  215276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1109 14:42:16.924628  215276 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21139-2320/.minikube CaCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21139-2320/.minikube}
	I1109 14:42:16.924653  215276 ubuntu.go:190] setting up certificates
	I1109 14:42:16.924672  215276 provision.go:84] configureAuth start
	I1109 14:42:16.924738  215276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:42:16.953445  215276 provision.go:143] copyHostCerts
	I1109 14:42:16.953508  215276 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem, removing ...
	I1109 14:42:16.953531  215276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem
	I1109 14:42:16.953612  215276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/ca.pem (1082 bytes)
	I1109 14:42:16.953715  215276 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem, removing ...
	I1109 14:42:16.953726  215276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem
	I1109 14:42:16.953755  215276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/cert.pem (1123 bytes)
	I1109 14:42:16.953814  215276 exec_runner.go:144] found /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem, removing ...
	I1109 14:42:16.953823  215276 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem
	I1109 14:42:16.953852  215276 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21139-2320/.minikube/key.pem (1679 bytes)
	I1109 14:42:16.953904  215276 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem org=jenkins.no-preload-545474 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-545474]
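	The step above regenerates the machine's server certificate with the SANs listed in san=[...]. If the SANs are in doubt, the certificate can be decoded from the host side with openssl (a sketch; the server.pem path is the ServerCertPath copied to the node a few lines further down):
	
		openssl x509 -in /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	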
	I1109 14:42:12.801148  212661 out.go:252]   - Booting up control plane ...
	I1109 14:42:12.801266  212661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 14:42:12.801359  212661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 14:42:12.801437  212661 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 14:42:12.821957  212661 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 14:42:12.822076  212661 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1109 14:42:12.830868  212661 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1109 14:42:12.831356  212661 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 14:42:12.831419  212661 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1109 14:42:13.030573  212661 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1109 14:42:13.030727  212661 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1109 14:42:14.033165  212661 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.00241333s
	I1109 14:42:14.036899  212661 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1109 14:42:14.037142  212661 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1109 14:42:14.037500  212661 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1109 14:42:14.037754  212661 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1109 14:42:17.635915  212661 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.597684288s
	I1109 14:42:17.560490  215276 provision.go:177] copyRemoteCerts
	I1109 14:42:17.560600  215276 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 14:42:17.560674  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:17.577883  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:17.685069  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 14:42:17.705359  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1109 14:42:17.725647  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 14:42:17.749475  215276 provision.go:87] duration metric: took 824.7788ms to configureAuth
	I1109 14:42:17.749553  215276 ubuntu.go:206] setting minikube options for container-runtime
	I1109 14:42:17.749787  215276 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:42:17.749943  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:17.773217  215276 main.go:143] libmachine: Using SSH client type: native
	I1109 14:42:17.773511  215276 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3eefe0] 0x3f1790 <nil>  [] 0s} 127.0.0.1 33095 <nil> <nil>}
	I1109 14:42:17.773526  215276 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1109 14:42:18.120042  215276 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1109 14:42:18.120115  215276 machine.go:97] duration metric: took 4.851641314s to provisionDockerMachine
	I1109 14:42:18.120140  215276 start.go:293] postStartSetup for "no-preload-545474" (driver="docker")
	I1109 14:42:18.120161  215276 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 14:42:18.120256  215276 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 14:42:18.120324  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:18.156279  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:18.264275  215276 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 14:42:18.268319  215276 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 14:42:18.268352  215276 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1109 14:42:18.268363  215276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/addons for local assets ...
	I1109 14:42:18.268417  215276 filesync.go:126] Scanning /home/jenkins/minikube-integration/21139-2320/.minikube/files for local assets ...
	I1109 14:42:18.268501  215276 filesync.go:149] local asset: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem -> 41162.pem in /etc/ssl/certs
	I1109 14:42:18.268615  215276 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 14:42:18.286310  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:42:18.317262  215276 start.go:296] duration metric: took 197.09633ms for postStartSetup
	I1109 14:42:18.317372  215276 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:42:18.317422  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:18.362097  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:18.481751  215276 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 14:42:18.487238  215276 fix.go:56] duration metric: took 5.626831565s for fixHost
	I1109 14:42:18.487266  215276 start.go:83] releasing machines lock for "no-preload-545474", held for 5.626887688s
	I1109 14:42:18.487339  215276 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-545474
	I1109 14:42:18.511681  215276 ssh_runner.go:195] Run: cat /version.json
	I1109 14:42:18.511718  215276 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 14:42:18.511734  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:18.511795  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:18.549445  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:18.557972  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:18.672374  215276 ssh_runner.go:195] Run: systemctl --version
	I1109 14:42:18.798542  215276 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1109 14:42:18.881491  215276 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1109 14:42:18.893646  215276 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1109 14:42:18.893786  215276 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1109 14:42:18.925321  215276 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1109 14:42:18.925396  215276 start.go:496] detecting cgroup driver to use...
	I1109 14:42:18.925444  215276 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1109 14:42:18.925518  215276 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1109 14:42:18.951876  215276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1109 14:42:18.972349  215276 docker.go:218] disabling cri-docker service (if available) ...
	I1109 14:42:18.972461  215276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1109 14:42:18.992899  215276 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1109 14:42:19.013392  215276 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1109 14:42:19.213618  215276 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1109 14:42:19.388237  215276 docker.go:234] disabling docker service ...
	I1109 14:42:19.388352  215276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1109 14:42:19.408498  215276 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1109 14:42:19.423115  215276 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1109 14:42:19.571162  215276 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1109 14:42:19.721938  215276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1109 14:42:19.741701  215276 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 14:42:19.762822  215276 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1109 14:42:19.762887  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.776152  215276 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1109 14:42:19.776311  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.788591  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.801495  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.816673  215276 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1109 14:42:19.827660  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.838412  215276 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.848533  215276 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1109 14:42:19.857863  215276 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1109 14:42:19.868733  215276 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1109 14:42:19.877552  215276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:42:20.024228  215276 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1109 14:42:20.201243  215276 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1109 14:42:20.201378  215276 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1109 14:42:20.205883  215276 start.go:564] Will wait 60s for crictl version
	I1109 14:42:20.205993  215276 ssh_runner.go:195] Run: which crictl
	I1109 14:42:20.210498  215276 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1109 14:42:20.240086  215276 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1109 14:42:20.240181  215276 ssh_runner.go:195] Run: crio --version
	I1109 14:42:20.290982  215276 ssh_runner.go:195] Run: crio --version
	I1109 14:42:20.335805  215276 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1109 14:42:20.338852  215276 cli_runner.go:164] Run: docker network inspect no-preload-545474 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 14:42:20.357534  215276 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1109 14:42:20.362646  215276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:42:20.374607  215276 kubeadm.go:884] updating cluster {Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1109 14:42:20.374722  215276 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 14:42:20.374761  215276 ssh_runner.go:195] Run: sudo crictl images --output json
	I1109 14:42:20.419650  215276 crio.go:514] all images are preloaded for cri-o runtime.
	I1109 14:42:20.419682  215276 cache_images.go:86] Images are preloaded, skipping loading
	I1109 14:42:20.419690  215276 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1109 14:42:20.419780  215276 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-545474 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
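	The kubelet unit and flags shown above are written to the node as a systemd drop-in (the 10-kubeadm.conf scp appears a few lines below). To confirm what the node actually picked up after the restart, the drop-in can be read back over SSH (a sketch, assuming the same profile name):
	
		out/minikube-linux-arm64 -p no-preload-545474 ssh -- sudo systemctl cat kubelet
		out/minikube-linux-arm64 -p no-preload-545474 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	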
	I1109 14:42:20.419859  215276 ssh_runner.go:195] Run: crio config
	I1109 14:42:20.497229  215276 cni.go:84] Creating CNI manager for ""
	I1109 14:42:20.497297  215276 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:42:20.497327  215276 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1109 14:42:20.497378  215276 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-545474 NodeName:no-preload-545474 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1109 14:42:20.497540  215276 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-545474"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 14:42:20.497631  215276 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1109 14:42:20.506931  215276 binaries.go:51] Found k8s binaries, skipping transfer
	I1109 14:42:20.507038  215276 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 14:42:20.515841  215276 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1109 14:42:20.528978  215276 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 14:42:20.542049  215276 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
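	The kubeadm config printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. If a config change is suspected of breaking this restart, it can be exercised without touching the running control plane via kubeadm's dry-run mode (a sketch, assuming kubeadm is staged alongside the kubelet under /var/lib/minikube/binaries/v1.34.1):
	
		out/minikube-linux-arm64 -p no-preload-545474 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	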
	I1109 14:42:20.557347  215276 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1109 14:42:20.561807  215276 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 14:42:20.573492  215276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:42:20.747972  215276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:42:20.765976  215276 certs.go:69] Setting up /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474 for IP: 192.168.85.2
	I1109 14:42:20.765999  215276 certs.go:195] generating shared ca certs ...
	I1109 14:42:20.766014  215276 certs.go:227] acquiring lock for ca certs: {Name:mkdc9287ecd89df27ec460b72246ed6b75395d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:20.766149  215276 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key
	I1109 14:42:20.766196  215276 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key
	I1109 14:42:20.766218  215276 certs.go:257] generating profile certs ...
	I1109 14:42:20.766310  215276 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.key
	I1109 14:42:20.766377  215276 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key.33b59cf6
	I1109 14:42:20.766417  215276 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.key
	I1109 14:42:20.766533  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem (1338 bytes)
	W1109 14:42:20.766567  215276 certs.go:480] ignoring /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116_empty.pem, impossibly tiny 0 bytes
	I1109 14:42:20.766580  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca-key.pem (1679 bytes)
	I1109 14:42:20.766605  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/ca.pem (1082 bytes)
	I1109 14:42:20.766630  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/cert.pem (1123 bytes)
	I1109 14:42:20.766657  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/certs/key.pem (1679 bytes)
	I1109 14:42:20.766702  215276 certs.go:484] found cert: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem (1708 bytes)
	I1109 14:42:20.767287  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 14:42:20.786616  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1109 14:42:20.804428  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 14:42:20.823532  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1109 14:42:20.843824  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1109 14:42:20.868769  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 14:42:20.887438  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 14:42:20.912007  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 14:42:20.935063  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/ssl/certs/41162.pem --> /usr/share/ca-certificates/41162.pem (1708 bytes)
	I1109 14:42:20.957674  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 14:42:20.981982  215276 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21139-2320/.minikube/certs/4116.pem --> /usr/share/ca-certificates/4116.pem (1338 bytes)
	I1109 14:42:21.009849  215276 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 14:42:21.026556  215276 ssh_runner.go:195] Run: openssl version
	I1109 14:42:21.033813  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/41162.pem && ln -fs /usr/share/ca-certificates/41162.pem /etc/ssl/certs/41162.pem"
	I1109 14:42:21.045375  215276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41162.pem
	I1109 14:42:21.049771  215276 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  9 13:36 /usr/share/ca-certificates/41162.pem
	I1109 14:42:21.049832  215276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41162.pem
	I1109 14:42:21.096750  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/41162.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 14:42:21.104843  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 14:42:21.113575  215276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:42:21.117243  215276 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  9 13:29 /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:42:21.117354  215276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 14:42:21.159203  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 14:42:21.167394  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4116.pem && ln -fs /usr/share/ca-certificates/4116.pem /etc/ssl/certs/4116.pem"
	I1109 14:42:21.175822  215276 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4116.pem
	I1109 14:42:21.179316  215276 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  9 13:36 /usr/share/ca-certificates/4116.pem
	I1109 14:42:21.179375  215276 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4116.pem
	I1109 14:42:21.220603  215276 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4116.pem /etc/ssl/certs/51391683.0"
	I1109 14:42:21.228357  215276 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1109 14:42:21.232091  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1109 14:42:21.274060  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1109 14:42:21.316825  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1109 14:42:21.357858  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1109 14:42:21.399153  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1109 14:42:21.447143  215276 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
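The openssl x509 -hash calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks for each CA bundle, and the trailing -checkend 86400 calls fail if a control-plane certificate expires within the next 24 hours. Reproduced by hand (a sketch; file paths are the ones shown in the log), the same two checks look like:

	# Create the hash-named trust-store symlink, as the log does for 3ec20f2e.0 and friends.
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	# Non-zero exit if the apiserver cert expires within 86400 seconds (24h).
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400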
	I1109 14:42:21.494822  215276 kubeadm.go:401] StartCluster: {Name:no-preload-545474 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-545474 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 14:42:21.494953  215276 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1109 14:42:21.495074  215276 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1109 14:42:21.541572  215276 cri.go:89] found id: ""
	I1109 14:42:21.541696  215276 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 14:42:21.554753  215276 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1109 14:42:21.554826  215276 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1109 14:42:21.554910  215276 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 14:42:21.576922  215276 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 14:42:21.577345  215276 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-545474" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:42:21.577451  215276 kubeconfig.go:62] /home/jenkins/minikube-integration/21139-2320/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-545474" cluster setting kubeconfig missing "no-preload-545474" context setting]
	I1109 14:42:21.577772  215276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:21.579112  215276 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 14:42:21.611145  215276 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1109 14:42:21.611179  215276 kubeadm.go:602] duration metric: took 56.334651ms to restartPrimaryControlPlane
	I1109 14:42:21.611189  215276 kubeadm.go:403] duration metric: took 116.393333ms to StartCluster
	I1109 14:42:21.611203  215276 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:21.611263  215276 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:42:21.611851  215276 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:21.612079  215276 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:42:21.612437  215276 config.go:182] Loaded profile config "no-preload-545474": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:42:21.612481  215276 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:42:21.612610  215276 addons.go:70] Setting storage-provisioner=true in profile "no-preload-545474"
	I1109 14:42:21.612631  215276 addons.go:239] Setting addon storage-provisioner=true in "no-preload-545474"
	W1109 14:42:21.612637  215276 addons.go:248] addon storage-provisioner should already be in state true
	I1109 14:42:21.612661  215276 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:42:21.613113  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:21.613492  215276 addons.go:70] Setting dashboard=true in profile "no-preload-545474"
	I1109 14:42:21.613514  215276 addons.go:239] Setting addon dashboard=true in "no-preload-545474"
	W1109 14:42:21.613521  215276 addons.go:248] addon dashboard should already be in state true
	I1109 14:42:21.613541  215276 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:42:21.613898  215276 addons.go:70] Setting default-storageclass=true in profile "no-preload-545474"
	I1109 14:42:21.613915  215276 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-545474"
	I1109 14:42:21.614128  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:21.614261  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:21.621883  215276 out.go:179] * Verifying Kubernetes components...
	I1109 14:42:21.626138  215276 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:42:21.666846  215276 addons.go:239] Setting addon default-storageclass=true in "no-preload-545474"
	W1109 14:42:21.666869  215276 addons.go:248] addon default-storageclass should already be in state true
	I1109 14:42:21.666894  215276 host.go:66] Checking if "no-preload-545474" exists ...
	I1109 14:42:21.667308  215276 cli_runner.go:164] Run: docker container inspect no-preload-545474 --format={{.State.Status}}
	I1109 14:42:21.687720  215276 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1109 14:42:21.690682  215276 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:42:21.694891  215276 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1109 14:42:18.919593  212661 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.879059826s
	I1109 14:42:21.039480  212661 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.001919207s
	I1109 14:42:21.062533  212661 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1109 14:42:21.581982  212661 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1109 14:42:21.622056  212661 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1109 14:42:21.622273  212661 kubeadm.go:319] [mark-control-plane] Marking the node auto-241021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1109 14:42:21.701773  212661 kubeadm.go:319] [bootstrap-token] Using token: wm7c8z.kpjb1ns37gy8zfml
	I1109 14:42:21.695013  215276 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:42:21.695024  215276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:42:21.695084  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:21.700018  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 14:42:21.700043  215276 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 14:42:21.700109  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:21.729453  215276 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:42:21.729475  215276 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:42:21.729538  215276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-545474
	I1109 14:42:21.770707  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:21.777714  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:21.788103  215276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33095 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/no-preload-545474/id_rsa Username:docker}
	I1109 14:42:22.176209  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 14:42:22.176275  215276 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 14:42:22.187070  215276 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:42:22.205030  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 14:42:22.205055  215276 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 14:42:22.232005  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 14:42:22.232030  215276 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 14:42:22.274675  215276 node_ready.go:35] waiting up to 6m0s for node "no-preload-545474" to be "Ready" ...
	I1109 14:42:22.293443  215276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:42:22.320499  215276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:42:22.325216  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 14:42:22.325275  215276 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1109 14:42:22.399855  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 14:42:22.399935  215276 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 14:42:22.507225  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 14:42:22.507289  215276 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 14:42:21.705543  212661 out.go:252]   - Configuring RBAC rules ...
	I1109 14:42:21.705681  212661 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1109 14:42:21.750565  212661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1109 14:42:21.782718  212661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1109 14:42:21.795444  212661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1109 14:42:21.805340  212661 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1109 14:42:21.813627  212661 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1109 14:42:21.873338  212661 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1109 14:42:22.447343  212661 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1109 14:42:22.787426  212661 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1109 14:42:22.787445  212661 kubeadm.go:319] 
	I1109 14:42:22.787513  212661 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1109 14:42:22.787518  212661 kubeadm.go:319] 
	I1109 14:42:22.787599  212661 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1109 14:42:22.787604  212661 kubeadm.go:319] 
	I1109 14:42:22.787630  212661 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1109 14:42:22.787691  212661 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1109 14:42:22.787744  212661 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1109 14:42:22.787749  212661 kubeadm.go:319] 
	I1109 14:42:22.787806  212661 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1109 14:42:22.787811  212661 kubeadm.go:319] 
	I1109 14:42:22.787860  212661 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1109 14:42:22.787884  212661 kubeadm.go:319] 
	I1109 14:42:22.787940  212661 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1109 14:42:22.788020  212661 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1109 14:42:22.788099  212661 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1109 14:42:22.788104  212661 kubeadm.go:319] 
	I1109 14:42:22.788192  212661 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1109 14:42:22.788283  212661 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1109 14:42:22.788288  212661 kubeadm.go:319] 
	I1109 14:42:22.788375  212661 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token wm7c8z.kpjb1ns37gy8zfml \
	I1109 14:42:22.788484  212661 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 \
	I1109 14:42:22.788506  212661 kubeadm.go:319] 	--control-plane 
	I1109 14:42:22.788510  212661 kubeadm.go:319] 
	I1109 14:42:22.788599  212661 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1109 14:42:22.788604  212661 kubeadm.go:319] 
	I1109 14:42:22.788689  212661 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token wm7c8z.kpjb1ns37gy8zfml \
	I1109 14:42:22.788796  212661 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bb1c1d61f7abb7f43f33866576b71e6342f6f67545a2d1f7cc51fa851cd51d52 
	I1109 14:42:22.795578  212661 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1109 14:42:22.795823  212661 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1109 14:42:22.795976  212661 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
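The kubeadm init output above ends with join commands whose --discovery-token-ca-cert-hash is the sha256 of the cluster CA public key. If needed, that hash can be recomputed from the CA before trusting a join command; the standard recipe (the pki path is kubeadm's default and is not printed in this log) is roughly:

	# Recompute the discovery-token CA cert hash from the cluster CA (DER-encoded public key, sha256).
	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'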
	I1109 14:42:22.795995  212661 cni.go:84] Creating CNI manager for ""
	I1109 14:42:22.796002  212661 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 14:42:22.799358  212661 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1109 14:42:22.681932  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 14:42:22.682003  215276 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 14:42:22.748000  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 14:42:22.748062  215276 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 14:42:22.805250  215276 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:42:22.805309  215276 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 14:42:22.846491  215276 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 14:42:22.802583  212661 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 14:42:22.813746  212661 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1109 14:42:22.813765  212661 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1109 14:42:22.872433  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
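Because the docker driver is combined with the crio runtime, minikube recommends kindnet and applies its manifest with the bundled kubectl, as shown above. Whether the CNI daemonset actually came up can be checked afterwards; the label selector below is assumed from the kindnet manifest and does not appear in this log:

	kubectl -n kube-system get pods -l app=kindnet -o wide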
	I1109 14:42:23.593094  212661 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 14:42:23.593214  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:23.593334  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-241021 minikube.k8s.io/updated_at=2025_11_09T14_42_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda minikube.k8s.io/name=auto-241021 minikube.k8s.io/primary=true
	I1109 14:42:24.037032  212661 ops.go:34] apiserver oom_adj: -16
	I1109 14:42:24.037165  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:24.538132  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:25.037616  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:25.537528  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:26.038030  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:26.537899  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:27.037641  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:27.537502  212661 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1109 14:42:27.749792  212661 kubeadm.go:1114] duration metric: took 4.15662262s to wait for elevateKubeSystemPrivileges
	I1109 14:42:27.749817  212661 kubeadm.go:403] duration metric: took 21.108743552s to StartCluster
	I1109 14:42:27.749840  212661 settings.go:142] acquiring lock: {Name:mk52a5a1cf83cfd3007d92af07a5e8dea393f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:27.749898  212661 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:42:27.750803  212661 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21139-2320/kubeconfig: {Name:mkbcbd43244c4eb498050023163f96f81faa78c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 14:42:27.751010  212661 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1109 14:42:27.751122  212661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 14:42:27.751386  212661 config.go:182] Loaded profile config "auto-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:42:27.751423  212661 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1109 14:42:27.751485  212661 addons.go:70] Setting storage-provisioner=true in profile "auto-241021"
	I1109 14:42:27.751501  212661 addons.go:239] Setting addon storage-provisioner=true in "auto-241021"
	I1109 14:42:27.751521  212661 host.go:66] Checking if "auto-241021" exists ...
	I1109 14:42:27.752037  212661 cli_runner.go:164] Run: docker container inspect auto-241021 --format={{.State.Status}}
	I1109 14:42:27.752579  212661 addons.go:70] Setting default-storageclass=true in profile "auto-241021"
	I1109 14:42:27.752605  212661 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-241021"
	I1109 14:42:27.752928  212661 cli_runner.go:164] Run: docker container inspect auto-241021 --format={{.State.Status}}
	I1109 14:42:27.754902  212661 out.go:179] * Verifying Kubernetes components...
	I1109 14:42:27.758529  212661 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 14:42:27.806008  212661 addons.go:239] Setting addon default-storageclass=true in "auto-241021"
	I1109 14:42:27.806046  212661 host.go:66] Checking if "auto-241021" exists ...
	I1109 14:42:27.806445  212661 cli_runner.go:164] Run: docker container inspect auto-241021 --format={{.State.Status}}
	I1109 14:42:27.809999  212661 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 14:42:27.813017  212661 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:42:27.813038  212661 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 14:42:27.813107  212661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-241021
	I1109 14:42:27.836325  212661 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 14:42:27.836344  212661 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 14:42:27.836406  212661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-241021
	I1109 14:42:27.862250  212661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/auto-241021/id_rsa Username:docker}
	I1109 14:42:27.894999  212661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33090 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/auto-241021/id_rsa Username:docker}
	I1109 14:42:28.420586  212661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 14:42:28.425168  212661 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 14:42:28.707807  212661 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1109 14:42:28.708342  212661 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1109 14:42:29.571950  212661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.151281155s)
	I1109 14:42:30.400352  212661 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.975101528s)
	I1109 14:42:30.400444  212661 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.692538737s)
	I1109 14:42:30.400498  212661 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.692090101s)
	I1109 14:42:30.401045  212661 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
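The pipeline above rewrites the coredns ConfigMap in place, inserting a hosts block that resolves host.minikube.internal to the gateway IP (192.168.76.1 here) ahead of the forward plugin, then replaces the ConfigMap. To inspect the injected stanza afterwards, the same kubeconfig and kubectl binary from the log can be reused (the grep is just a convenience):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o yaml | grep -A4 'hosts {'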
	I1109 14:42:30.403017  212661 node_ready.go:35] waiting up to 15m0s for node "auto-241021" to be "Ready" ...
	I1109 14:42:30.404434  212661 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1109 14:42:28.143094  215276 node_ready.go:49] node "no-preload-545474" is "Ready"
	I1109 14:42:28.143120  215276 node_ready.go:38] duration metric: took 5.868412391s for node "no-preload-545474" to be "Ready" ...
	I1109 14:42:28.143133  215276 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:42:28.143187  215276 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:42:28.552462  215276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.258983922s)
	I1109 14:42:31.168018  215276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.847483496s)
	I1109 14:42:31.168193  215276 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.321604371s)
	I1109 14:42:31.168446  215276 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.0252263s)
	I1109 14:42:31.168476  215276 api_server.go:72] duration metric: took 9.556363424s to wait for apiserver process to appear ...
	I1109 14:42:31.168488  215276 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:42:31.168510  215276 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1109 14:42:31.171614  215276 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-545474 addons enable metrics-server
	
	I1109 14:42:31.174868  215276 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1109 14:42:31.177928  215276 addons.go:515] duration metric: took 9.565418839s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1109 14:42:31.189933  215276 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1109 14:42:31.191608  215276 api_server.go:141] control plane version: v1.34.1
	I1109 14:42:31.191657  215276 api_server.go:131] duration metric: took 23.15778ms to wait for apiserver health ...
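The apiserver health wait above is a plain HTTPS GET against /healthz; a 200 response with body "ok" counts as healthy. From inside the node the equivalent probe is a one-liner (a sketch; -k skips TLS verification because the minikube CA is not in the default trust store):

	curl -sk -w '\n' https://192.168.85.2:8443/healthz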
	I1109 14:42:31.191668  215276 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:42:31.198643  215276 system_pods.go:59] 8 kube-system pods found
	I1109 14:42:31.198696  215276 system_pods.go:61] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:42:31.198719  215276 system_pods.go:61] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:42:31.198732  215276 system_pods.go:61] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:42:31.198741  215276 system_pods.go:61] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:42:31.198757  215276 system_pods.go:61] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:42:31.198776  215276 system_pods.go:61] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:42:31.198789  215276 system_pods.go:61] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:42:31.198797  215276 system_pods.go:61] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Running
	I1109 14:42:31.198805  215276 system_pods.go:74] duration metric: took 7.124426ms to wait for pod list to return data ...
	I1109 14:42:31.198818  215276 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:42:31.208306  215276 default_sa.go:45] found service account: "default"
	I1109 14:42:31.208343  215276 default_sa.go:55] duration metric: took 9.518772ms for default service account to be created ...
	I1109 14:42:31.208363  215276 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:42:31.213366  215276 system_pods.go:86] 8 kube-system pods found
	I1109 14:42:31.213429  215276 system_pods.go:89] "coredns-66bc5c9577-gq42x" [e6074143-5b8d-41d4-8951-a551d8d2a4b9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:42:31.213464  215276 system_pods.go:89] "etcd-no-preload-545474" [293db1fc-5ee2-477f-bdde-78af20f645e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 14:42:31.213483  215276 system_pods.go:89] "kindnet-t9j49" [7be905a2-33f1-4116-b900-707561fa3d05] Running
	I1109 14:42:31.213497  215276 system_pods.go:89] "kube-apiserver-no-preload-545474" [4993944b-0090-4965-90b5-a23757af7772] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 14:42:31.213503  215276 system_pods.go:89] "kube-controller-manager-no-preload-545474" [e6bbfbb1-6531-47ac-8224-0d5cf70c2f59] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 14:42:31.213515  215276 system_pods.go:89] "kube-proxy-2mnwv" [5de7aa0e-eb03-4535-9040-8d34d0520820] Running
	I1109 14:42:31.213534  215276 system_pods.go:89] "kube-scheduler-no-preload-545474" [6bc2c009-cfa3-47ae-a27a-f261cdbb70df] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 14:42:31.213543  215276 system_pods.go:89] "storage-provisioner" [5c1ae78c-82fb-4b73-a894-745d823e352c] Running
	I1109 14:42:31.213551  215276 system_pods.go:126] duration metric: took 5.180867ms to wait for k8s-apps to be running ...
	I1109 14:42:31.213567  215276 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:42:31.213632  215276 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:42:31.233192  215276 system_svc.go:56] duration metric: took 19.616864ms WaitForService to wait for kubelet
	I1109 14:42:31.233268  215276 kubeadm.go:587] duration metric: took 9.621152502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:42:31.233301  215276 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:42:31.238898  215276 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:42:31.238928  215276 node_conditions.go:123] node cpu capacity is 2
	I1109 14:42:31.238950  215276 node_conditions.go:105] duration metric: took 5.632055ms to run NodePressure ...
	I1109 14:42:31.238962  215276 start.go:242] waiting for startup goroutines ...
	I1109 14:42:31.238969  215276 start.go:247] waiting for cluster config update ...
	I1109 14:42:31.238987  215276 start.go:256] writing updated cluster config ...
	I1109 14:42:31.239257  215276 ssh_runner.go:195] Run: rm -f paused
	I1109 14:42:31.243557  215276 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:42:31.247762  215276 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gq42x" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:42:30.407251  212661 addons.go:515] duration metric: took 2.65581727s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1109 14:42:30.907146  212661 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-241021" context rescaled to 1 replicas
	W1109 14:42:32.406046  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:33.254541  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:35.754001  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:34.906282  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:37.405894  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:37.755859  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:39.756274  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:42.254091  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:39.406554  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:41.406797  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:44.256943  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:46.754486  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:43.905973  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:46.405976  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:48.754943  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:51.253884  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:48.406619  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:50.906354  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:53.757079  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:56.252756  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:52.907028  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:55.406168  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:42:58.252872  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:43:00.276669  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	W1109 14:42:57.906674  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:43:00.410976  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:43:02.755199  215276 pod_ready.go:104] pod "coredns-66bc5c9577-gq42x" is not "Ready", error: <nil>
	I1109 14:43:04.253085  215276 pod_ready.go:94] pod "coredns-66bc5c9577-gq42x" is "Ready"
	I1109 14:43:04.253113  215276 pod_ready.go:86] duration metric: took 33.005281391s for pod "coredns-66bc5c9577-gq42x" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.255993  215276 pod_ready.go:83] waiting for pod "etcd-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.260738  215276 pod_ready.go:94] pod "etcd-no-preload-545474" is "Ready"
	I1109 14:43:04.260762  215276 pod_ready.go:86] duration metric: took 4.741134ms for pod "etcd-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.263194  215276 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.268142  215276 pod_ready.go:94] pod "kube-apiserver-no-preload-545474" is "Ready"
	I1109 14:43:04.268169  215276 pod_ready.go:86] duration metric: took 4.951475ms for pod "kube-apiserver-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.270468  215276 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.450900  215276 pod_ready.go:94] pod "kube-controller-manager-no-preload-545474" is "Ready"
	I1109 14:43:04.450931  215276 pod_ready.go:86] duration metric: took 180.436157ms for pod "kube-controller-manager-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:04.651906  215276 pod_ready.go:83] waiting for pod "kube-proxy-2mnwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:05.050824  215276 pod_ready.go:94] pod "kube-proxy-2mnwv" is "Ready"
	I1109 14:43:05.050853  215276 pod_ready.go:86] duration metric: took 398.918608ms for pod "kube-proxy-2mnwv" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:05.250907  215276 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:05.650794  215276 pod_ready.go:94] pod "kube-scheduler-no-preload-545474" is "Ready"
	I1109 14:43:05.650824  215276 pod_ready.go:86] duration metric: took 399.891248ms for pod "kube-scheduler-no-preload-545474" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:05.650836  215276 pod_ready.go:40] duration metric: took 34.407245084s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:43:05.708028  215276 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:43:05.711282  215276 out.go:179] * Done! kubectl is now configured to use "no-preload-545474" cluster and "default" namespace by default
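The pod_ready loop that just finished polls each kube-system control-plane pod until it reports the Ready condition or disappears. A comparable one-shot check with plain kubectl (label taken from the wait list in the log, timeout chosen arbitrarily) might be:

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
	kubectl -n kube-system get pods -o wide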
	W1109 14:43:02.906266  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:43:04.906398  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	W1109 14:43:07.406888  212661 node_ready.go:57] node "auto-241021" has "Ready":"False" status (will retry)
	I1109 14:43:09.906973  212661 node_ready.go:49] node "auto-241021" is "Ready"
	I1109 14:43:09.907003  212661 node_ready.go:38] duration metric: took 39.503928875s for node "auto-241021" to be "Ready" ...
	I1109 14:43:09.907016  212661 api_server.go:52] waiting for apiserver process to appear ...
	I1109 14:43:09.907077  212661 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:43:09.921621  212661 api_server.go:72] duration metric: took 42.17058545s to wait for apiserver process to appear ...
	I1109 14:43:09.921645  212661 api_server.go:88] waiting for apiserver healthz status ...
	I1109 14:43:09.921668  212661 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1109 14:43:09.930314  212661 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1109 14:43:09.931453  212661 api_server.go:141] control plane version: v1.34.1
	I1109 14:43:09.931481  212661 api_server.go:131] duration metric: took 9.829424ms to wait for apiserver health ...
	I1109 14:43:09.931491  212661 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 14:43:09.934858  212661 system_pods.go:59] 8 kube-system pods found
	I1109 14:43:09.934900  212661 system_pods.go:61] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:09.934907  212661 system_pods.go:61] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:09.934913  212661 system_pods.go:61] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:09.934917  212661 system_pods.go:61] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:09.934921  212661 system_pods.go:61] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:09.934926  212661 system_pods.go:61] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:09.934930  212661 system_pods.go:61] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:09.934937  212661 system_pods.go:61] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:09.934951  212661 system_pods.go:74] duration metric: took 3.454197ms to wait for pod list to return data ...
	I1109 14:43:09.934963  212661 default_sa.go:34] waiting for default service account to be created ...
	I1109 14:43:09.937937  212661 default_sa.go:45] found service account: "default"
	I1109 14:43:09.937959  212661 default_sa.go:55] duration metric: took 2.989372ms for default service account to be created ...
	I1109 14:43:09.937968  212661 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 14:43:09.941116  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:09.941150  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:09.941162  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:09.941181  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:09.941188  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:09.941193  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:09.941197  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:09.941201  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:09.941208  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:09.941235  212661 retry.go:31] will retry after 262.178205ms: missing components: kube-dns
	I1109 14:43:10.214339  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:10.214368  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:10.214375  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:10.214381  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:10.214390  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:10.214394  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:10.214398  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:10.214402  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:10.214407  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:10.214421  212661 retry.go:31] will retry after 247.959356ms: missing components: kube-dns
	I1109 14:43:10.467883  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:10.467960  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:10.467973  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:10.467980  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:10.467985  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:10.467990  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:10.467994  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:10.467998  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:10.468005  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:10.468024  212661 retry.go:31] will retry after 451.140887ms: missing components: kube-dns
	I1109 14:43:10.923675  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:10.923720  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 14:43:10.923750  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:10.923766  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:10.923771  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:10.923776  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:10.923785  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:10.923789  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:10.923799  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 14:43:10.923823  212661 retry.go:31] will retry after 579.89858ms: missing components: kube-dns
	I1109 14:43:11.509695  212661 system_pods.go:86] 8 kube-system pods found
	I1109 14:43:11.509728  212661 system_pods.go:89] "coredns-66bc5c9577-54bms" [e20e0bce-590c-4118-94c3-efc439fe752a] Running
	I1109 14:43:11.509740  212661 system_pods.go:89] "etcd-auto-241021" [03253057-b5e1-4faf-a973-71a1dc8f7290] Running
	I1109 14:43:11.509745  212661 system_pods.go:89] "kindnet-r8mbp" [8a92f989-ea4a-4e48-a80e-a7d9ff65f2da] Running
	I1109 14:43:11.509750  212661 system_pods.go:89] "kube-apiserver-auto-241021" [9e459dfd-a2e9-4f69-b017-085a249bac48] Running
	I1109 14:43:11.509755  212661 system_pods.go:89] "kube-controller-manager-auto-241021" [0e483be5-7cac-4dbf-98b9-196945ba1d9a] Running
	I1109 14:43:11.509760  212661 system_pods.go:89] "kube-proxy-vp98l" [eb0ae333-a02a-4d76-9678-75c6048ed7a0] Running
	I1109 14:43:11.509768  212661 system_pods.go:89] "kube-scheduler-auto-241021" [989ac301-7d80-4de7-b995-74c2b8f0f2bc] Running
	I1109 14:43:11.509772  212661 system_pods.go:89] "storage-provisioner" [10245536-2ef4-4db2-9d42-1c408069c6ac] Running
	I1109 14:43:11.509784  212661 system_pods.go:126] duration metric: took 1.571805713s to wait for k8s-apps to be running ...
	I1109 14:43:11.509794  212661 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 14:43:11.509855  212661 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:43:11.533181  212661 system_svc.go:56] duration metric: took 23.37666ms WaitForService to wait for kubelet
	I1109 14:43:11.533261  212661 kubeadm.go:587] duration metric: took 43.782230569s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 14:43:11.533289  212661 node_conditions.go:102] verifying NodePressure condition ...
	I1109 14:43:11.536453  212661 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1109 14:43:11.536486  212661 node_conditions.go:123] node cpu capacity is 2
	I1109 14:43:11.536500  212661 node_conditions.go:105] duration metric: took 3.204905ms to run NodePressure ...
	I1109 14:43:11.536539  212661 start.go:242] waiting for startup goroutines ...
	I1109 14:43:11.536554  212661 start.go:247] waiting for cluster config update ...
	I1109 14:43:11.536567  212661 start.go:256] writing updated cluster config ...
	I1109 14:43:11.536873  212661 ssh_runner.go:195] Run: rm -f paused
	I1109 14:43:11.541357  212661 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:43:11.545066  212661 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-54bms" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.549961  212661 pod_ready.go:94] pod "coredns-66bc5c9577-54bms" is "Ready"
	I1109 14:43:11.549988  212661 pod_ready.go:86] duration metric: took 4.890297ms for pod "coredns-66bc5c9577-54bms" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.552829  212661 pod_ready.go:83] waiting for pod "etcd-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.558135  212661 pod_ready.go:94] pod "etcd-auto-241021" is "Ready"
	I1109 14:43:11.558228  212661 pod_ready.go:86] duration metric: took 5.31929ms for pod "etcd-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.561108  212661 pod_ready.go:83] waiting for pod "kube-apiserver-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.565873  212661 pod_ready.go:94] pod "kube-apiserver-auto-241021" is "Ready"
	I1109 14:43:11.565897  212661 pod_ready.go:86] duration metric: took 4.763108ms for pod "kube-apiserver-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.568432  212661 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:11.945928  212661 pod_ready.go:94] pod "kube-controller-manager-auto-241021" is "Ready"
	I1109 14:43:11.945961  212661 pod_ready.go:86] duration metric: took 377.503838ms for pod "kube-controller-manager-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:12.146933  212661 pod_ready.go:83] waiting for pod "kube-proxy-vp98l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:12.545096  212661 pod_ready.go:94] pod "kube-proxy-vp98l" is "Ready"
	I1109 14:43:12.545121  212661 pod_ready.go:86] duration metric: took 398.11811ms for pod "kube-proxy-vp98l" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:12.746586  212661 pod_ready.go:83] waiting for pod "kube-scheduler-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:13.145844  212661 pod_ready.go:94] pod "kube-scheduler-auto-241021" is "Ready"
	I1109 14:43:13.145872  212661 pod_ready.go:86] duration metric: took 399.258142ms for pod "kube-scheduler-auto-241021" in "kube-system" namespace to be "Ready" or be gone ...
	I1109 14:43:13.145885  212661 pod_ready.go:40] duration metric: took 1.604453405s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1109 14:43:13.214630  212661 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1109 14:43:13.217763  212661 out.go:179] * Done! kubectl is now configured to use "auto-241021" cluster and "default" namespace by default
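
The start-up trace above polls the kube-system namespace until every pod reports Ready, retrying with a growing delay ("will retry after ...ms: missing components: kube-dns") and then re-checking each pod's Ready condition (pod_ready.go). Below is a minimal illustrative sketch of that polling pattern using client-go; it is not minikube's system_pods.go/pod_ready.go code, and the kubeconfig location is an assumption.

// readiness-poll sketch: illustrative only, not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	delay := 250 * time.Millisecond
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil {
			notReady := 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					notReady++
				}
			}
			if notReady == 0 {
				fmt.Println("all kube-system pods are Ready")
				return
			}
			fmt.Printf("%d pod(s) not Ready, retrying after %v\n", notReady, delay)
		}
		time.Sleep(delay)
		if delay < 2*time.Second {
			delay *= 2 // grow the delay between checks
		}
	}
}

Note that the retry intervals logged above (262ms, 248ms, 451ms, 580ms) are randomized rather than strictly doubling, so treat the fixed growth in this sketch as a simplification.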
	
	
	==> CRI-O <==
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.396313727Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.399807399Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.399842238Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.399911351Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.402999316Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.403034754Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.403058566Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.406317248Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.406367267Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.406391957Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.410102476Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 09 14:43:10 no-preload-545474 crio[651]: time="2025-11-09T14:43:10.41013752Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 09 14:43:14 no-preload-545474 crio[651]: time="2025-11-09T14:43:14.991521577Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=52163c2f-11cf-4e18-b63c-06002e4698c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:43:14 no-preload-545474 crio[651]: time="2025-11-09T14:43:14.993953585Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2015529a-0ba9-4867-916a-0f45f19d2422 name=/runtime.v1.ImageService/ImageStatus
	Nov 09 14:43:14 no-preload-545474 crio[651]: time="2025-11-09T14:43:14.995454802Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq/dashboard-metrics-scraper" id=92c8afc1-a953-4956-837e-a3e074eb9532 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:43:14 no-preload-545474 crio[651]: time="2025-11-09T14:43:14.995603464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.016032641Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.040704699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.080934049Z" level=info msg="Created container 33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq/dashboard-metrics-scraper" id=92c8afc1-a953-4956-837e-a3e074eb9532 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.084254411Z" level=info msg="Starting container: 33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247" id=4bf5bba8-6381-46c0-8469-13b254b39737 name=/runtime.v1.RuntimeService/StartContainer
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.090898022Z" level=info msg="Started container" PID=1716 containerID=33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq/dashboard-metrics-scraper id=4bf5bba8-6381-46c0-8469-13b254b39737 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02bf9cccffb2e7f921681057abe235f77d9c425b36f48aaea700389f33baa558
	Nov 09 14:43:15 no-preload-545474 conmon[1714]: conmon 33687307c11ac0258a0c <ninfo>: container 1716 exited with status 1
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.344861432Z" level=info msg="Removing container: 54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9" id=af81a6c7-1449-4d80-9216-93a0a5fdcb8a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.357392667Z" level=info msg="Error loading conmon cgroup of container 54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9: cgroup deleted" id=af81a6c7-1449-4d80-9216-93a0a5fdcb8a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 09 14:43:15 no-preload-545474 crio[651]: time="2025-11-09T14:43:15.361966358Z" level=info msg="Removed container 54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq/dashboard-metrics-scraper" id=af81a6c7-1449-4d80-9216-93a0a5fdcb8a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	33687307c11ac       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   02bf9cccffb2e       dashboard-metrics-scraper-6ffb444bf9-blrjq   kubernetes-dashboard
	97146ccbb7051       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago       Running             storage-provisioner         2                   9f68346c5cb0a       storage-provisioner                          kube-system
	2f347db595849       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   634a038efb97f       kubernetes-dashboard-855c9754f9-zlh4p        kubernetes-dashboard
	dff62811213e4       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   e612db17c7925       busybox                                      default
	a56ad15fb1acd       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           53 seconds ago       Exited              storage-provisioner         1                   9f68346c5cb0a       storage-provisioner                          kube-system
	3a56c743b8a3e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   cfa06e172519b       coredns-66bc5c9577-gq42x                     kube-system
	5528464a75a8a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   316a639ecc69f       kube-proxy-2mnwv                             kube-system
	9844fa4dd0e74       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   facbfc2f8c056       kindnet-t9j49                                kube-system
	e0fa19fb74d19       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   466a74dbf4924       kube-controller-manager-no-preload-545474    kube-system
	9c6841e7685fc       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   87e88904b7475       kube-apiserver-no-preload-545474             kube-system
	9df79b1c8bb2b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   fa52aa961b93e       etcd-no-preload-545474                       kube-system
	baa0cc7198ae0       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   3be72e38361e5       kube-scheduler-no-preload-545474             kube-system
	
	
	==> coredns [3a56c743b8a3e4504b63b2de555d8f1d8433520edee172e996d5bb694372c514] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47556 - 38614 "HINFO IN 6565056289328285906.5439230190109313127. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004844166s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-545474
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-545474
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70e94ed289d7418ac27e2778f9cf44be27a4ecda
	                    minikube.k8s.io/name=no-preload-545474
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_09T14_41_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 09 Nov 2025 14:41:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-545474
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 09 Nov 2025 14:43:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 09 Nov 2025 14:43:09 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 09 Nov 2025 14:43:09 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 09 Nov 2025 14:43:09 +0000   Sun, 09 Nov 2025 14:41:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 09 Nov 2025 14:43:09 +0000   Sun, 09 Nov 2025 14:41:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-545474
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fe80b37a6a3cb4bc1adadb06905c59a
	  System UUID:                c8e11a83-d01e-4114-9a5f-a54126ee8120
	  Boot ID:                    c2b4b02b-b0e5-42ff-aa45-346ad9349595
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-gq42x                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-no-preload-545474                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         118s
	  kube-system                 kindnet-t9j49                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-545474              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-545474     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-2mnwv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-545474              100m (5%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-blrjq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zlh4p         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 112s                 kube-proxy       
	  Normal   Starting                 52s                  kube-proxy       
	  Warning  CgroupV1                 2m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node no-preload-545474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node no-preload-545474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node no-preload-545474 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  118s                 kubelet          Node no-preload-545474 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 118s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    118s                 kubelet          Node no-preload-545474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s                 kubelet          Node no-preload-545474 status is now: NodeHasSufficientPID
	  Normal   Starting                 118s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           115s                 node-controller  Node no-preload-545474 event: Registered Node no-preload-545474 in Controller
	  Normal   NodeReady                98s                  kubelet          Node no-preload-545474 status is now: NodeReady
	  Normal   Starting                 63s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)    kubelet          Node no-preload-545474 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)    kubelet          Node no-preload-545474 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)    kubelet          Node no-preload-545474 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                  node-controller  Node no-preload-545474 event: Registered Node no-preload-545474 in Controller
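
Earlier in the trace, node_conditions.go reads the node's capacity (cpu, ephemeral-storage) and verifies that no pressure condition is set; the "Capacity" and "Conditions" blocks in the describe output above are the same data. A hedged sketch of reading those fields with client-go follows (again assuming a default kubeconfig; this is not minikube's implementation):

// node-pressure sketch: illustrative only, not minikube's node_conditions.go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity corresponds to the "Capacity:" block in the describe output above.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())

		// Report any pressure condition that is True (all are False in the output above).
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				if c.Status == corev1.ConditionTrue {
					fmt.Printf("  node %s reports %s=True: %s\n", n.Name, c.Type, c.Message)
				}
			}
		}
	}
}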
	
	
	==> dmesg <==
	[Nov 9 14:19] overlayfs: idmapped layers are currently not supported
	[ +17.180951] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:20] overlayfs: idmapped layers are currently not supported
	[ +23.736977] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:22] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:23] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:25] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:27] overlayfs: idmapped layers are currently not supported
	[ +24.536638] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:31] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:33] overlayfs: idmapped layers are currently not supported
	[ +39.159698] overlayfs: idmapped layers are currently not supported
	[  +5.641155] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:34] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:35] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:36] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:37] overlayfs: idmapped layers are currently not supported
	[  +3.455400] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:39] overlayfs: idmapped layers are currently not supported
	[  +3.462027] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:40] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:41] overlayfs: idmapped layers are currently not supported
	[ +35.139553] overlayfs: idmapped layers are currently not supported
	[Nov 9 14:42] overlayfs: idmapped layers are currently not supported
	[  +6.994514] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9df79b1c8bb2b207310b5498d17036b5975ec9b07c6ca842407741f9ad73de97] <==
	{"level":"warn","ts":"2025-11-09T14:42:25.889246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:25.945123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:25.982789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.019657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.039930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.084878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.146607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.196449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.326424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.339539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.358397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.381470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.400801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.421112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.432434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.462049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.483436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.496547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.532652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.568183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.637397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.656485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.685791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.716598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-09T14:42:26.861774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59192","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:43:23 up  1:25,  0 user,  load average: 3.50, 4.07, 3.21
	Linux no-preload-545474 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9844fa4dd0e741c1e135049b0ec50a2c5f6206bf090fce7e184f76f6f5de6cb7] <==
	I1109 14:42:30.172055       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1109 14:42:30.172327       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1109 14:42:30.172453       1 main.go:148] setting mtu 1500 for CNI 
	I1109 14:42:30.172465       1 main.go:178] kindnetd IP family: "ipv4"
	I1109 14:42:30.172476       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-09T14:42:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1109 14:42:30.390150       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1109 14:42:30.390179       1 controller.go:381] "Waiting for informer caches to sync"
	I1109 14:42:30.390187       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1109 14:42:30.390868       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1109 14:43:00.391846       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1109 14:43:00.391846       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1109 14:43:00.392066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1109 14:43:00.392170       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1109 14:43:01.790746       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1109 14:43:01.790777       1 metrics.go:72] Registering metrics
	I1109 14:43:01.792181       1 controller.go:711] "Syncing nftables rules"
	I1109 14:43:10.389870       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:43:10.389966       1 main.go:301] handling current node
	I1109 14:43:20.390485       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1109 14:43:20.390537       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9c6841e7685fc5801280c9ddf2d6c0a2a346830e53491f2f3d439c2e21c977fd] <==
	I1109 14:42:28.306583       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1109 14:42:28.315086       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1109 14:42:28.315204       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1109 14:42:28.318286       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 14:42:28.328457       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1109 14:42:28.328714       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1109 14:42:28.331601       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 14:42:28.355113       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1109 14:42:28.355247       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1109 14:42:28.355257       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1109 14:42:28.356123       1 cache.go:39] Caches are synced for autoregister controller
	I1109 14:42:28.389818       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1109 14:42:28.418117       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1109 14:42:28.582374       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 14:42:29.021249       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1109 14:42:30.762286       1 controller.go:667] quota admission added evaluator for: namespaces
	I1109 14:42:30.820813       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1109 14:42:30.856491       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 14:42:30.877109       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 14:42:30.947965       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.68.119"}
	I1109 14:42:30.965582       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.142.193"}
	I1109 14:42:32.871140       1 controller.go:667] quota admission added evaluator for: endpoints
	I1109 14:42:33.269821       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1109 14:42:33.320701       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1109 14:42:33.320753       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e0fa19fb74d19affdcb53dc2a19669b9497ed088c06c1be5d6368f4a1d768ad8] <==
	I1109 14:42:32.765416       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1109 14:42:32.765495       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1109 14:42:32.765556       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1109 14:42:32.765600       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1109 14:42:32.768010       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1109 14:42:32.768086       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1109 14:42:32.771225       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1109 14:42:32.772387       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1109 14:42:32.773639       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1109 14:42:32.775365       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1109 14:42:32.775609       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1109 14:42:32.777696       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1109 14:42:32.778407       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1109 14:42:32.781503       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1109 14:42:32.781530       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1109 14:42:32.787767       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1109 14:42:32.787955       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1109 14:42:32.788032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1109 14:42:32.788174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1109 14:42:32.810883       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:42:32.813378       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1109 14:42:32.814531       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1109 14:42:32.833025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1109 14:42:32.833052       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1109 14:42:32.833082       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [5528464a75a8a31cc909e0b5261d839f7dcb4a347d188366b316e9c264cb7e1e] <==
	I1109 14:42:30.754359       1 server_linux.go:53] "Using iptables proxy"
	I1109 14:42:30.853624       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1109 14:42:30.953761       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1109 14:42:30.953794       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1109 14:42:30.953857       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1109 14:42:30.987517       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1109 14:42:30.987584       1 server_linux.go:132] "Using iptables Proxier"
	I1109 14:42:31.005200       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1109 14:42:31.005539       1 server.go:527] "Version info" version="v1.34.1"
	I1109 14:42:31.005554       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:42:31.022262       1 config.go:200] "Starting service config controller"
	I1109 14:42:31.022301       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1109 14:42:31.022357       1 config.go:106] "Starting endpoint slice config controller"
	I1109 14:42:31.022364       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1109 14:42:31.022383       1 config.go:403] "Starting serviceCIDR config controller"
	I1109 14:42:31.022387       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1109 14:42:31.028577       1 config.go:309] "Starting node config controller"
	I1109 14:42:31.028597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1109 14:42:31.028605       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1109 14:42:31.122926       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1109 14:42:31.122964       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1109 14:42:31.123025       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [baa0cc7198ae04a8507839c6fddece0836011983b84bd4fd652613a18bd01d25] <==
	I1109 14:42:25.172747       1 serving.go:386] Generated self-signed cert in-memory
	W1109 14:42:28.236200       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 14:42:28.236235       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 14:42:28.236253       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 14:42:28.236261       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 14:42:28.449950       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1109 14:42:28.449987       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 14:42:28.479113       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1109 14:42:28.479249       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:42:28.479270       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 14:42:28.479286       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1109 14:42:28.581732       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 09 14:42:33 no-preload-545474 kubelet[771]: I1109 14:42:33.438586     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1fe25f87-54de-446f-b6f2-08786b029184-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-zlh4p\" (UID: \"1fe25f87-54de-446f-b6f2-08786b029184\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlh4p"
	Nov 09 14:42:33 no-preload-545474 kubelet[771]: I1109 14:42:33.438662     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67qds\" (UniqueName: \"kubernetes.io/projected/1fe25f87-54de-446f-b6f2-08786b029184-kube-api-access-67qds\") pod \"kubernetes-dashboard-855c9754f9-zlh4p\" (UID: \"1fe25f87-54de-446f-b6f2-08786b029184\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlh4p"
	Nov 09 14:42:33 no-preload-545474 kubelet[771]: W1109 14:42:33.745994     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/435b3ae5d44375062a24914709f3375acdafc76fdd93c4f83f8e7b4be40a79be/crio-634a038efb97fe395a77fb89c85792c1a54c8dcad0155f2322867fd066854d12 WatchSource:0}: Error finding container 634a038efb97fe395a77fb89c85792c1a54c8dcad0155f2322867fd066854d12: Status 404 returned error can't find the container with id 634a038efb97fe395a77fb89c85792c1a54c8dcad0155f2322867fd066854d12
	Nov 09 14:42:34 no-preload-545474 kubelet[771]: I1109 14:42:34.017344     771 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 09 14:42:39 no-preload-545474 kubelet[771]: I1109 14:42:39.533813     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zlh4p" podStartSLOduration=1.718788525 podStartE2EDuration="6.530137596s" podCreationTimestamp="2025-11-09 14:42:33 +0000 UTC" firstStartedPulling="2025-11-09 14:42:33.749916086 +0000 UTC m=+12.983644201" lastFinishedPulling="2025-11-09 14:42:38.561265157 +0000 UTC m=+17.794993272" observedRunningTime="2025-11-09 14:42:39.236841381 +0000 UTC m=+18.470569488" watchObservedRunningTime="2025-11-09 14:42:39.530137596 +0000 UTC m=+18.763865703"
	Nov 09 14:42:43 no-preload-545474 kubelet[771]: I1109 14:42:43.227993     771 scope.go:117] "RemoveContainer" containerID="ca37865d13ab04e5ea2d15e831dfad521e8629843fb0a5bf702301d7d79b252b"
	Nov 09 14:42:44 no-preload-545474 kubelet[771]: I1109 14:42:44.232400     771 scope.go:117] "RemoveContainer" containerID="ca37865d13ab04e5ea2d15e831dfad521e8629843fb0a5bf702301d7d79b252b"
	Nov 09 14:42:44 no-preload-545474 kubelet[771]: I1109 14:42:44.232664     771 scope.go:117] "RemoveContainer" containerID="4759455c1e720b78c772ed068536c091978cc22c15401f50eccf30ce1c75fcd5"
	Nov 09 14:42:44 no-preload-545474 kubelet[771]: E1109 14:42:44.232804     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:42:45 no-preload-545474 kubelet[771]: I1109 14:42:45.238963     771 scope.go:117] "RemoveContainer" containerID="4759455c1e720b78c772ed068536c091978cc22c15401f50eccf30ce1c75fcd5"
	Nov 09 14:42:45 no-preload-545474 kubelet[771]: E1109 14:42:45.239460     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:42:53 no-preload-545474 kubelet[771]: I1109 14:42:53.718326     771 scope.go:117] "RemoveContainer" containerID="4759455c1e720b78c772ed068536c091978cc22c15401f50eccf30ce1c75fcd5"
	Nov 09 14:42:54 no-preload-545474 kubelet[771]: I1109 14:42:54.262429     771 scope.go:117] "RemoveContainer" containerID="4759455c1e720b78c772ed068536c091978cc22c15401f50eccf30ce1c75fcd5"
	Nov 09 14:42:54 no-preload-545474 kubelet[771]: I1109 14:42:54.262709     771 scope.go:117] "RemoveContainer" containerID="54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9"
	Nov 09 14:42:54 no-preload-545474 kubelet[771]: E1109 14:42:54.262854     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:43:01 no-preload-545474 kubelet[771]: I1109 14:43:01.291204     771 scope.go:117] "RemoveContainer" containerID="a56ad15fb1acd25af1ccdd95286c2550aeb592ff86d9a87affec2580a370d7dc"
	Nov 09 14:43:03 no-preload-545474 kubelet[771]: I1109 14:43:03.718343     771 scope.go:117] "RemoveContainer" containerID="54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9"
	Nov 09 14:43:03 no-preload-545474 kubelet[771]: E1109 14:43:03.718970     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:43:14 no-preload-545474 kubelet[771]: I1109 14:43:14.989791     771 scope.go:117] "RemoveContainer" containerID="54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9"
	Nov 09 14:43:15 no-preload-545474 kubelet[771]: I1109 14:43:15.327539     771 scope.go:117] "RemoveContainer" containerID="54427a8df9d12b8f89cabb1e02d454376fd533549a2a5bd4728c070344b169a9"
	Nov 09 14:43:15 no-preload-545474 kubelet[771]: I1109 14:43:15.327827     771 scope.go:117] "RemoveContainer" containerID="33687307c11ac0258a0c4a328dca8bb0ee9be8beb937d99cd72b8fc358b43247"
	Nov 09 14:43:15 no-preload-545474 kubelet[771]: E1109 14:43:15.329287     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-blrjq_kubernetes-dashboard(c8f18330-ef09-4d75-9de7-69680540601f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-blrjq" podUID="c8f18330-ef09-4d75-9de7-69680540601f"
	Nov 09 14:43:17 no-preload-545474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 09 14:43:17 no-preload-545474 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 09 14:43:17 no-preload-545474 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
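
The kubelet messages above show the restart back-off for the crashing dashboard-metrics-scraper container doubling from 10s to 20s to 40s. A tiny illustrative sketch of that capped-doubling pattern follows; the 5-minute cap is an assumption for the example, not a value taken from kubelet:

// crashloop-backoff sketch: illustrates the doubling back-off visible above.
package main

import (
	"fmt"
	"time"
)

func main() {
	const maxBackoff = 5 * time.Minute // assumed cap for illustration
	backoff := 10 * time.Second

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("restart attempt %d: back-off %v\n", attempt, backoff)
		// Double the delay after each failed restart, up to the cap.
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}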
	
	
	==> kubernetes-dashboard [2f347db595849365a063711d3213a98014e01fa8ff9740f0c0cae1ee2989edca] <==
	2025/11/09 14:42:38 Starting overwatch
	2025/11/09 14:42:38 Using namespace: kubernetes-dashboard
	2025/11/09 14:42:38 Using in-cluster config to connect to apiserver
	2025/11/09 14:42:38 Using secret token for csrf signing
	2025/11/09 14:42:38 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/09 14:42:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/09 14:42:38 Successful initial request to the apiserver, version: v1.34.1
	2025/11/09 14:42:38 Generating JWE encryption key
	2025/11/09 14:42:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/09 14:42:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/09 14:42:39 Initializing JWE encryption key from synchronized object
	2025/11/09 14:42:39 Creating in-cluster Sidecar client
	2025/11/09 14:42:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/09 14:42:39 Serving insecurely on HTTP port: 9090
	2025/11/09 14:43:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [97146ccbb7051494060c47263c5598c7fdc03c86778dbc868aff7662435f9c33] <==
	I1109 14:43:01.343582       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 14:43:01.357393       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 14:43:01.357538       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1109 14:43:01.360232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:04.815311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:09.075334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:12.673661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:15.726836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:18.750078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:18.755506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:43:18.755705       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 14:43:18.756460       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c28aa965-9a7b-46e6-8965-1a16b69399de", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-545474_e2317694-a436-48d8-8baf-a0459015c3a4 became leader
	I1109 14:43:18.756677       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-545474_e2317694-a436-48d8-8baf-a0459015c3a4!
	W1109 14:43:18.762916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:18.769731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1109 14:43:18.857017       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-545474_e2317694-a436-48d8-8baf-a0459015c3a4!
	W1109 14:43:20.780390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:20.787830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:22.791773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1109 14:43:22.796163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a56ad15fb1acd25af1ccdd95286c2550aeb592ff86d9a87affec2580a370d7dc] <==
	I1109 14:42:30.578115       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 14:43:00.585460       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-545474 -n no-preload-545474
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-545474 -n no-preload-545474: exit status 2 (510.386595ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-545474 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.98s)

                                                
                                    

Test pass (253/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 9.74
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 6.41
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 161.39
31 TestAddons/serial/GCPAuth/Namespaces 0.23
32 TestAddons/serial/GCPAuth/FakeCredentials 10.83
48 TestAddons/StoppedEnableDisable 12.71
49 TestCertOptions 37.73
50 TestCertExpiration 245.66
52 TestForceSystemdFlag 40.97
53 TestForceSystemdEnv 47.54
58 TestErrorSpam/setup 35.23
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.11
61 TestErrorSpam/pause 5.36
62 TestErrorSpam/unpause 5.83
63 TestErrorSpam/stop 1.5
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 76.3
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 37.9
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
75 TestFunctional/serial/CacheCmd/cache/add_local 1.05
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 41
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.5
87 TestFunctional/serial/InvalidService 4.21
89 TestFunctional/parallel/ConfigCmd 0.52
90 TestFunctional/parallel/DashboardCmd 11.18
91 TestFunctional/parallel/DryRun 0.6
92 TestFunctional/parallel/InternationalLanguage 0.3
93 TestFunctional/parallel/StatusCmd 1.35
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 25.6
101 TestFunctional/parallel/SSHCmd 0.78
102 TestFunctional/parallel/CpCmd 2.19
104 TestFunctional/parallel/FileSync 0.4
105 TestFunctional/parallel/CertSync 2.2
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
113 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.37
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.1
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 6.91
130 TestFunctional/parallel/MountCmd/specific-port 1.81
131 TestFunctional/parallel/MountCmd/VerifyCleanup 2.26
132 TestFunctional/parallel/ServiceCmd/List 0.59
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.62
137 TestFunctional/parallel/Version/short 0.09
138 TestFunctional/parallel/Version/components 0.76
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.11
144 TestFunctional/parallel/ImageCommands/Setup 0.68
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.64
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 149.53
163 TestMultiControlPlane/serial/DeployApp 8.19
164 TestMultiControlPlane/serial/PingHostFromPods 1.46
165 TestMultiControlPlane/serial/AddWorkerNode 32.82
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 19.94
169 TestMultiControlPlane/serial/StopSecondaryNode 12.95
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 27.83
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.01
185 TestJSONOutput/start/Command 80.68
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.88
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 40.29
211 TestKicCustomNetwork/use_default_bridge_network 39.12
212 TestKicExistingNetwork 38.6
213 TestKicCustomSubnet 38.48
214 TestKicStaticIP 36.75
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 74.64
219 TestMountStart/serial/StartWithMountFirst 8.89
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 6.03
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.99
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 137.13
231 TestMultiNode/serial/DeployApp2Nodes 4.9
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 59.31
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.75
236 TestMultiNode/serial/CopyFile 10.54
237 TestMultiNode/serial/StopNode 2.4
238 TestMultiNode/serial/StartAfterStop 8.21
239 TestMultiNode/serial/RestartKeepsNodes 78.63
240 TestMultiNode/serial/DeleteNode 5.71
241 TestMultiNode/serial/StopMultiNode 24
242 TestMultiNode/serial/RestartMultiNode 48.4
243 TestMultiNode/serial/ValidateNameConflict 41.43
248 TestPreload 128.19
250 TestScheduledStopUnix 109.19
253 TestInsufficientStorage 13.67
254 TestRunningBinaryUpgrade 55.61
256 TestKubernetesUpgrade 349.97
257 TestMissingContainerUpgrade 108.91
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 50.43
261 TestNoKubernetes/serial/StartWithStopK8s 114.76
262 TestNoKubernetes/serial/Start 8.06
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
265 TestNoKubernetes/serial/ProfileList 35.01
266 TestNoKubernetes/serial/Stop 1.3
267 TestNoKubernetes/serial/StartNoArgs 6.87
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
269 TestStoppedBinaryUpgrade/Setup 0.92
270 TestStoppedBinaryUpgrade/Upgrade 59.53
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.26
280 TestPause/serial/Start 84.95
281 TestPause/serial/SecondStartNoReconfiguration 41.18
290 TestNetworkPlugins/group/false 5.27
295 TestStartStop/group/old-k8s-version/serial/FirstStart 62.78
296 TestStartStop/group/old-k8s-version/serial/DeployApp 8.44
298 TestStartStop/group/old-k8s-version/serial/Stop 12.04
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
300 TestStartStop/group/old-k8s-version/serial/SecondStart 51.33
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
303 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.46
308 TestStartStop/group/embed-certs/serial/FirstStart 83.99
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
310 TestStartStop/group/embed-certs/serial/DeployApp 8.68
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.25
314 TestStartStop/group/embed-certs/serial/Stop 11.95
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.65
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
318 TestStartStop/group/embed-certs/serial/SecondStart 53.79
319 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
323 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
328 TestStartStop/group/no-preload/serial/FirstStart 78.45
330 TestStartStop/group/newest-cni/serial/FirstStart 49.72
331 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/Stop 1.53
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
335 TestStartStop/group/newest-cni/serial/SecondStart 16.23
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
340 TestStartStop/group/no-preload/serial/DeployApp 8.39
341 TestNetworkPlugins/group/auto/Start 80.53
343 TestStartStop/group/no-preload/serial/Stop 12.38
344 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
345 TestStartStop/group/no-preload/serial/SecondStart 53.66
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
348 TestNetworkPlugins/group/auto/KubeletFlags 0.32
349 TestNetworkPlugins/group/auto/NetCatPod 10.27
350 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
352 TestNetworkPlugins/group/auto/DNS 0.24
353 TestNetworkPlugins/group/auto/Localhost 0.26
354 TestNetworkPlugins/group/auto/HairPin 0.2
355 TestNetworkPlugins/group/kindnet/Start 84.49
356 TestNetworkPlugins/group/calico/Start 60.61
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/calico/KubeletFlags 0.32
360 TestNetworkPlugins/group/calico/NetCatPod 12.31
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
362 TestNetworkPlugins/group/kindnet/NetCatPod 12.33
363 TestNetworkPlugins/group/calico/DNS 0.16
364 TestNetworkPlugins/group/calico/Localhost 0.13
365 TestNetworkPlugins/group/calico/HairPin 0.13
366 TestNetworkPlugins/group/kindnet/DNS 0.18
367 TestNetworkPlugins/group/kindnet/Localhost 0.16
368 TestNetworkPlugins/group/kindnet/HairPin 0.14
369 TestNetworkPlugins/group/custom-flannel/Start 64.24
370 TestNetworkPlugins/group/enable-default-cni/Start 85.83
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.28
373 TestNetworkPlugins/group/custom-flannel/DNS 0.16
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.36
378 TestNetworkPlugins/group/flannel/Start 65.91
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
382 TestNetworkPlugins/group/bridge/Start 74.82
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
385 TestNetworkPlugins/group/flannel/NetCatPod 10.28
386 TestNetworkPlugins/group/flannel/DNS 0.15
387 TestNetworkPlugins/group/flannel/Localhost 0.13
388 TestNetworkPlugins/group/flannel/HairPin 0.15
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
390 TestNetworkPlugins/group/bridge/NetCatPod 11.26
391 TestNetworkPlugins/group/bridge/DNS 0.14
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestNetworkPlugins/group/bridge/HairPin 0.21
TestDownloadOnly/v1.28.0/json-events (9.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-802526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-802526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.740658373s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (9.74s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1109 13:29:15.478254    4116 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1109 13:29:15.478351    4116 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-802526
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-802526: exit status 85 (88.854776ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-802526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-802526 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:05
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:05.780080    4121 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:29:05.780202    4121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:05.780212    4121 out.go:374] Setting ErrFile to fd 2...
	I1109 13:29:05.780217    4121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:05.780457    4121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	W1109 13:29:05.780585    4121 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21139-2320/.minikube/config/config.json: open /home/jenkins/minikube-integration/21139-2320/.minikube/config/config.json: no such file or directory
	I1109 13:29:05.780964    4121 out.go:368] Setting JSON to true
	I1109 13:29:05.781738    4121 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":696,"bootTime":1762694250,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:29:05.781801    4121 start.go:143] virtualization:  
	I1109 13:29:05.785769    4121 out.go:99] [download-only-802526] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1109 13:29:05.785918    4121 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball: no such file or directory
	I1109 13:29:05.785981    4121 notify.go:221] Checking for updates...
	I1109 13:29:05.788927    4121 out.go:171] MINIKUBE_LOCATION=21139
	I1109 13:29:05.791916    4121 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:29:05.795019    4121 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:29:05.798036    4121 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:29:05.800973    4121 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1109 13:29:05.806890    4121 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 13:29:05.807190    4121 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:05.834160    4121 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:29:05.834286    4121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:06.248752    4121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-09 13:29:06.239516275 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:29:06.248869    4121 docker.go:319] overlay module found
	I1109 13:29:06.251890    4121 out.go:99] Using the docker driver based on user configuration
	I1109 13:29:06.251932    4121 start.go:309] selected driver: docker
	I1109 13:29:06.251939    4121 start.go:930] validating driver "docker" against <nil>
	I1109 13:29:06.252047    4121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:06.322114    4121 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-09 13:29:06.304658979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:29:06.322271    4121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:06.322572    4121 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1109 13:29:06.322749    4121 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 13:29:06.326019    4121 out.go:171] Using Docker driver with root privileges
	I1109 13:29:06.328907    4121 cni.go:84] Creating CNI manager for ""
	I1109 13:29:06.328975    4121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:06.328989    4121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:06.329065    4121 start.go:353] cluster config:
	{Name:download-only-802526 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-802526 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:06.332013    4121 out.go:99] Starting "download-only-802526" primary control-plane node in "download-only-802526" cluster
	I1109 13:29:06.332040    4121 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:29:06.334907    4121 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:29:06.334959    4121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 13:29:06.335120    4121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:29:06.354494    4121 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:29:06.354686    4121 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1109 13:29:06.354790    4121 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:29:06.388968    4121 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1109 13:29:06.388997    4121 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:06.389153    4121 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1109 13:29:06.392515    4121 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1109 13:29:06.392538    4121 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1109 13:29:06.483415    4121 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1109 13:29:06.483546    4121 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-802526 host does not exist
	  To start a cluster, run: "minikube start -p download-only-802526"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-802526
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (6.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-603977 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-603977 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.409594558s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (6.41s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1109 13:29:22.325551    4116 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1109 13:29:22.325596    4116 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-603977
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-603977: exit status 85 (90.348169ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-802526 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-802526 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ delete  │ -p download-only-802526                                                                                                                                                   │ download-only-802526 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │ 09 Nov 25 13:29 UTC │
	│ start   │ -o=json --download-only -p download-only-603977 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-603977 │ jenkins │ v1.37.0 │ 09 Nov 25 13:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/09 13:29:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 13:29:15.954692    4324 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:29:15.955012    4324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:15.955026    4324 out.go:374] Setting ErrFile to fd 2...
	I1109 13:29:15.955033    4324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:29:15.955317    4324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:29:15.955752    4324 out.go:368] Setting JSON to true
	I1109 13:29:15.956550    4324 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":706,"bootTime":1762694250,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:29:15.956617    4324 start.go:143] virtualization:  
	I1109 13:29:15.960092    4324 out.go:99] [download-only-603977] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:29:15.960302    4324 notify.go:221] Checking for updates...
	I1109 13:29:15.963175    4324 out.go:171] MINIKUBE_LOCATION=21139
	I1109 13:29:15.966183    4324 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:29:15.969159    4324 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:29:15.972075    4324 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:29:15.975101    4324 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1109 13:29:15.980910    4324 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 13:29:15.981169    4324 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:29:16.011531    4324 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:29:16.011681    4324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:16.071980    4324 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-09 13:29:16.062655144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:29:16.072094    4324 docker.go:319] overlay module found
	I1109 13:29:16.075163    4324 out.go:99] Using the docker driver based on user configuration
	I1109 13:29:16.075206    4324 start.go:309] selected driver: docker
	I1109 13:29:16.075213    4324 start.go:930] validating driver "docker" against <nil>
	I1109 13:29:16.075313    4324 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:29:16.127845    4324 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-09 13:29:16.118952587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:29:16.128060    4324 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1109 13:29:16.128333    4324 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1109 13:29:16.128493    4324 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 13:29:16.131709    4324 out.go:171] Using Docker driver with root privileges
	I1109 13:29:16.134603    4324 cni.go:84] Creating CNI manager for ""
	I1109 13:29:16.134667    4324 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1109 13:29:16.134676    4324 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1109 13:29:16.134753    4324 start.go:353] cluster config:
	{Name:download-only-603977 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-603977 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:29:16.137696    4324 out.go:99] Starting "download-only-603977" primary control-plane node in "download-only-603977" cluster
	I1109 13:29:16.137714    4324 cache.go:134] Beginning downloading kic base image for docker with crio
	I1109 13:29:16.140581    4324 out.go:99] Pulling base image v0.0.48-1761985721-21837 ...
	I1109 13:29:16.140625    4324 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:16.140780    4324 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1109 13:29:16.159791    4324 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1109 13:29:16.159944    4324 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1109 13:29:16.159967    4324 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1109 13:29:16.159972    4324 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1109 13:29:16.159988    4324 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1109 13:29:16.194724    4324 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1109 13:29:16.194748    4324 cache.go:65] Caching tarball of preloaded images
	I1109 13:29:16.194908    4324 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1109 13:29:16.197962    4324 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1109 13:29:16.197992    4324 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1109 13:29:16.285993    4324 preload.go:295] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1109 13:29:16.286050    4324 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21139-2320/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-603977 host does not exist
	  To start a cluster, run: "minikube start -p download-only-603977"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-603977
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1109 13:29:23.462695    4116 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-258515 --alsologtostderr --binary-mirror http://127.0.0.1:41697 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-258515" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-258515
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-651467
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-651467: exit status 85 (70.120323ms)

                                                
                                                
-- stdout --
	* Profile "addons-651467" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-651467"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-651467
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-651467: exit status 85 (73.678158ms)

                                                
                                                
-- stdout --
	* Profile "addons-651467" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-651467"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (161.39s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-651467 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-651467 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.388988389s)
--- PASS: TestAddons/Setup (161.39s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-651467 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-651467 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (10.83s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-651467 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-651467 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9103b4af-a588-464b-acc8-9a75a7087aa6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9103b4af-a588-464b-acc8-9a75a7087aa6] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.002886138s
addons_test.go:694: (dbg) Run:  kubectl --context addons-651467 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-651467 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-651467 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-651467 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.83s)
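The gcp-auth addon is expected to inject GOOGLE_APPLICATION_CREDENTIALS and mount the fake credentials at /google-app-creds.json in new pods. A standalone sketch of the two probes exercised above, reusing the context, pod name, and paths from the log, and assuming the busybox pod from testdata/busybox.yaml is still Running in the default namespace:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Run the same in-pod probes the test issues above.
		for _, probe := range [][]string{
			{"exec", "busybox", "--", "printenv", "GOOGLE_APPLICATION_CREDENTIALS"},
			{"exec", "busybox", "--", "cat", "/google-app-creds.json"},
		} {
			args := append([]string{"--context", "addons-651467"}, probe...)
			out, err := exec.Command("kubectl", args...).Output()
			if err != nil {
				log.Fatalf("kubectl %s: %v", strings.Join(probe, " "), err)
			}
			fmt.Printf("kubectl %s ->\n%s\n", strings.Join(probe, " "), out)
		}
	}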

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.71s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-651467
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-651467: (12.436807082s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-651467
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-651467
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-651467
--- PASS: TestAddons/StoppedEnableDisable (12.71s)

                                                
                                    
x
+
TestCertOptions (37.73s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-276181 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-276181 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.953399356s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-276181 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-276181 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-276181 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-276181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-276181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-276181: (2.056983531s)
--- PASS: TestCertOptions (37.73s)
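The openssl call above verifies that the extra names and IPs passed via --apiserver-names and --apiserver-ips ended up as SANs in the apiserver certificate. A minimal sketch of the same inspection in Go, assuming the certificate file has been copied out of the node (the path matches the one shown in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // should include localhost and www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses) // should include 127.0.0.1 and 192.168.15.15
	}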

                                                
                                    
x
+
TestCertExpiration (245.66s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-179822 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-179822 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.076490596s)
E1109 14:34:20.736893    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1109 14:37:06.456589    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-179822 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (20.788750472s)
helpers_test.go:175: Cleaning up "cert-expiration-179822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-179822
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-179822: (2.792399711s)
--- PASS: TestCertExpiration (245.66s)
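After the second start with --cert-expiration=8760h the cluster certificates should be reissued with roughly a year of validity. A hedged standalone sketch that pulls apiserver.crt off the node over `minikube ssh` (profile name and certificate path taken from this run) and reports its expiry:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "cert-expiration-179822",
			"ssh", "sudo cat /var/lib/minikube/certs/apiserver.crt").Output()
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(out)
		if block == nil {
			log.Fatal("no PEM certificate in output")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver.crt expires %s (in %s)\n",
			cert.NotAfter.Format(time.RFC3339), time.Until(cert.NotAfter).Round(time.Minute))
	}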

                                                
                                    
x
+
TestForceSystemdFlag (40.97s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-519664 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-519664 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.695026026s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-519664 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-519664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-519664
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-519664: (2.912387343s)
--- PASS: TestForceSystemdFlag (40.97s)
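The `cat /etc/crio/crio.conf.d/02-crio.conf` step above is where the test confirms the runtime picked up --force-systemd. A standalone sketch of that check; the exact `cgroup_manager = "systemd"` key is an assumption about what the drop-in should contain, not something shown in this log:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Read CRI-O's drop-in config from the node, as the test does via ssh above.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-519664",
			"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
		if err != nil {
			log.Fatal(err)
		}
		// Assumed expectation with --force-systemd: CRI-O uses the systemd cgroup manager.
		if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
			fmt.Println("CRI-O is configured for the systemd cgroup manager")
		} else {
			fmt.Println("cgroup_manager setting not found; config was:")
			fmt.Println(string(out))
		}
	}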

                                                
                                    
x
+
TestForceSystemdEnv (47.54s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-413219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-413219 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (44.180608008s)
helpers_test.go:175: Cleaning up "force-systemd-env-413219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-413219
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-413219: (3.363972269s)
--- PASS: TestForceSystemdEnv (47.54s)

                                                
                                    
x
+
TestErrorSpam/setup (35.23s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-036188 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-036188 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-036188 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-036188 --driver=docker  --container-runtime=crio: (35.230081385s)
--- PASS: TestErrorSpam/setup (35.23s)

                                                
                                    
x
+
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
x
+
TestErrorSpam/status (1.11s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 status
--- PASS: TestErrorSpam/status (1.11s)

                                                
                                    
x
+
TestErrorSpam/pause (5.36s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause: exit status 80 (1.989522316s)

                                                
                                                
-- stdout --
	* Pausing node nospam-036188 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:36:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause: exit status 80 (1.773083931s)

                                                
                                                
-- stdout --
	* Pausing node nospam-036188 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:36:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause: exit status 80 (1.595429842s)

                                                
                                                
-- stdout --
	* Pausing node nospam-036188 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:36:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.36s)
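All three pause attempts above fail the same way: `minikube pause` shells out to `sudo runc list -f json` on the node, and that probe errors because /run/runc is not present on this CRI-O setup. A small diagnostic sketch, outside the test suite and using the profile name from this run, that re-runs the same probe over `minikube ssh`:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Reproduce the failing probe and check whether the runc state directory exists.
		for _, cmd := range []string{
			"sudo runc list -f json",
			"ls -ld /run/runc",
		} {
			out, err := exec.Command("out/minikube-linux-arm64", "-p", "nospam-036188", "ssh", cmd).CombinedOutput()
			fmt.Printf("$ %s\nerr=%v\n%s\n", cmd, err, out)
		}
	}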

                                                
                                    
x
+
TestErrorSpam/unpause (5.83s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause: exit status 80 (2.088374492s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-036188 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:36:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause: exit status 80 (1.854980297s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-036188 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:36:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause: exit status 80 (1.888220007s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-036188 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-09T13:36:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.83s)

                                                
                                    
x
+
TestErrorSpam/stop (1.5s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 stop: (1.304109038s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-036188 --log_dir /tmp/nospam-036188 stop
--- PASS: TestErrorSpam/stop (1.50s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21139-2320/.minikube/files/etc/test/nested/copy/4116/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (76.3s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-002359 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1109 13:37:06.457410    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:06.463894    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:06.475321    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:06.496713    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:06.538178    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:06.619597    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:06.781220    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:07.102878    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:07.744936    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:09.027141    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:11.588617    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:16.710101    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:37:26.952326    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-002359 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.299369736s)
--- PASS: TestFunctional/serial/StartWithProxy (76.30s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (37.9s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1109 13:37:46.474750    4116 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-002359 --alsologtostderr -v=8
E1109 13:37:47.433608    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-002359 --alsologtostderr -v=8: (37.896414438s)
functional_test.go:678: soft start took 37.896911241s for "functional-002359" cluster.
I1109 13:38:24.371444    4116 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (37.90s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-002359 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-002359 cache add registry.k8s.io/pause:3.1: (1.121987464s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-002359 cache add registry.k8s.io/pause:3.3: (1.127647165s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-002359 cache add registry.k8s.io/pause:latest: (1.087383737s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-002359 /tmp/TestFunctionalserialCacheCmdcacheadd_local4051729111/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cache add minikube-local-cache-test:functional-002359
E1109 13:38:28.395817    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cache delete minikube-local-cache-test:functional-002359
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-002359
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (306.571103ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
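The sequence above is: remove the cached image on the node, confirm crictl no longer finds it, run `cache reload`, then confirm it is back. A standalone sketch of the same round trip, assuming the binary path and profile from this run:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// run invokes the minikube binary from this CI run against the
	// functional-002359 profile and returns combined output plus error.
	func run(args ...string) (string, error) {
		full := append([]string{"-p", "functional-002359"}, args...)
		out, err := exec.Command("out/minikube-linux-arm64", full...).CombinedOutput()
		return string(out), err
	}

	func main() {
		const img = "registry.k8s.io/pause:latest"

		// Remove the image, confirm it is gone, reload the cache, confirm it is back,
		// mirroring the sequence in the log above.
		if _, err := run("ssh", "sudo crictl rmi "+img); err != nil {
			log.Fatalf("rmi: %v", err)
		}
		if _, err := run("ssh", "sudo crictl inspecti "+img); err == nil {
			log.Fatal("expected inspecti to fail after rmi")
		}
		if out, err := run("cache", "reload"); err != nil {
			log.Fatalf("cache reload: %v\n%s", err, out)
		}
		if _, err := run("ssh", "sudo crictl inspecti "+img); err != nil {
			log.Fatalf("image still missing after cache reload: %v", err)
		}
		fmt.Println("cache reload restored", img)
	}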

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 kubectl -- --context functional-002359 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-002359 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (41s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-002359 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-002359 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.996256976s)
functional_test.go:776: restart took 40.996389021s for "functional-002359" cluster.
I1109 13:39:12.705950    4116 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.00s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-002359 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
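The health check above fetches the control-plane pods as JSON and asserts each is Running and Ready. A minimal sketch of an equivalent standalone check (context name taken from the log); it prints the same phase and readiness information rather than failing a test:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-002359",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "False"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}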

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-002359 logs: (1.443372956s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 logs --file /tmp/TestFunctionalserialLogsFileCmd2712916647/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-002359 logs --file /tmp/TestFunctionalserialLogsFileCmd2712916647/001/logs.txt: (1.495055188s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.50s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.21s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-002359 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-002359
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-002359: exit status 115 (372.882603ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31805 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-002359 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.21s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 config get cpus: exit status 14 (71.118571ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 config get cpus: exit status 14 (81.693588ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (11.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-002359 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-002359 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 30395: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.18s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-002359 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-002359 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (271.804773ms)

                                                
                                                
-- stdout --
	* [functional-002359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:49:49.305159   29815 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:49:49.305355   29815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:49:49.305364   29815 out.go:374] Setting ErrFile to fd 2...
	I1109 13:49:49.305369   29815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:49:49.305717   29815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:49:49.306172   29815 out.go:368] Setting JSON to false
	I1109 13:49:49.307103   29815 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1940,"bootTime":1762694250,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:49:49.307218   29815 start.go:143] virtualization:  
	I1109 13:49:49.311457   29815 out.go:179] * [functional-002359] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 13:49:49.318620   29815 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:49:49.318758   29815 notify.go:221] Checking for updates...
	I1109 13:49:49.325859   29815 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:49:49.328834   29815 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:49:49.331699   29815 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:49:49.334641   29815 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:49:49.337597   29815 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:49:49.343073   29815 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:49:49.343594   29815 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:49:49.393121   29815 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:49:49.393237   29815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:49:49.502549   29815 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 13:49:49.490071495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:49:49.502656   29815 docker.go:319] overlay module found
	I1109 13:49:49.505915   29815 out.go:179] * Using the docker driver based on existing profile
	I1109 13:49:49.508936   29815 start.go:309] selected driver: docker
	I1109 13:49:49.508958   29815 start.go:930] validating driver "docker" against &{Name:functional-002359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-002359 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:49:49.509055   29815 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:49:49.512561   29815 out.go:203] 
	W1109 13:49:49.515235   29815 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1109 13:49:49.519159   29815 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-002359 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.60s)

TestFunctional/parallel/InternationalLanguage (0.3s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-002359 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-002359 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (295.183246ms)

-- stdout --
	* [functional-002359] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1109 13:49:49.052028   29717 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:49:49.053719   29717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:49:49.053741   29717 out.go:374] Setting ErrFile to fd 2...
	I1109 13:49:49.053747   29717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:49:49.054278   29717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:49:49.054803   29717 out.go:368] Setting JSON to false
	I1109 13:49:49.055858   29717 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":1939,"bootTime":1762694250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 13:49:49.055966   29717 start.go:143] virtualization:  
	I1109 13:49:49.059770   29717 out.go:179] * [functional-002359] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1109 13:49:49.062785   29717 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 13:49:49.062953   29717 notify.go:221] Checking for updates...
	I1109 13:49:49.068649   29717 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 13:49:49.071498   29717 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 13:49:49.074440   29717 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 13:49:49.077348   29717 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 13:49:49.080396   29717 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 13:49:49.085081   29717 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:49:49.085771   29717 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 13:49:49.124100   29717 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 13:49:49.124225   29717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:49:49.231743   29717 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-09 13:49:49.220868932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:49:49.231853   29717 docker.go:319] overlay module found
	I1109 13:49:49.235041   29717 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1109 13:49:49.238005   29717 start.go:309] selected driver: docker
	I1109 13:49:49.238029   29717 start.go:930] validating driver "docker" against &{Name:functional-002359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-002359 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1109 13:49:49.238134   29717 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 13:49:49.241655   29717 out.go:203] 
	W1109 13:49:49.244514   29717 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1109 13:49:49.247404   29717 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)
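
The French output above is exactly what this test asserts, so it is left untranslated ("Utilisation du pilote docker basé sur le profil existant" = "Using the docker driver based on the existing profile"). A minimal sketch of reproducing the localized dry run by hand, assuming minikube honors the standard locale environment variables (the mechanism the harness actually uses is not visible in this log):
LC_ALL=fr_FR.UTF-8 out/minikube-linux-arm64 start -p functional-002359 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # hypothetical locale override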

TestFunctional/parallel/StatusCmd (1.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)
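
For reference, a minimal sketch of the same status queries outside the test harness, using the profile name from this run; the Go-template fields match the ones exercised above:
out/minikube-linux-arm64 -p functional-002359 status                                              # human-readable summary
out/minikube-linux-arm64 -p functional-002359 status -o json                                      # machine-readable
out/minikube-linux-arm64 -p functional-002359 status -f '{{.Host}} {{.Kubelet}} {{.APIServer}}'   # custom Go template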

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)
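
A sketch of filtering the JSON addon list with jq; jq availability and the exact field layout (a map of addon name to an object with a Status field) are assumptions, not shown in this log:
out/minikube-linux-arm64 -p functional-002359 addons list -o json | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'   # field names assumed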

TestFunctional/parallel/PersistentVolumeClaim (25.6s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [ebf54c13-6f7b-4167-920f-f85c358ec5ab] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003586404s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-002359 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-002359 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-002359 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-002359 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [37b78684-4831-44e1-a23d-cd2a56189074] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [37b78684-4831-44e1-a23d-cd2a56189074] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003752884s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-002359 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-002359 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-002359 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ee2d7db4-acef-43ba-839f-f4c93929975d] Pending
helpers_test.go:352: "sp-pod" [ee2d7db4-acef-43ba-839f-f4c93929975d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ee2d7db4-acef-43ba-839f-f4c93929975d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003229422s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-002359 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.60s)
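
The manifests exercised above live under testdata/storage-provisioner; a minimal sketch of an equivalent claim, with everything except the claim name "myclaim" assumed (the log does not show the manifest contents):
kubectl --context functional-002359 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-002359 get pvc myclaim -o jsonpath='{.status.phase}'   # expect Bound once the default storage class provisions it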

TestFunctional/parallel/SSHCmd (0.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

TestFunctional/parallel/CpCmd (2.19s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh -n functional-002359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cp functional-002359:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3848952019/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh -n functional-002359 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh -n functional-002359 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.19s)

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4116/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo cat /etc/test/nested/copy/4116/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4116.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo cat /etc/ssl/certs/4116.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4116.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo cat /usr/share/ca-certificates/4116.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo cat /etc/ssl/certs/41162.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41162.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo cat /usr/share/ca-certificates/41162.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)
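
The numeric .0 names checked above look like OpenSSL subject-hash filenames; a sketch for confirming which synced certificate a given hash belongs to, assuming that convention and a local copy of the cert (both assumptions):
openssl x509 -noout -subject_hash -in ./4116.pem   # hypothetical local copy; the printed hash should name the matching /etc/ssl/certs/<hash>.0 file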

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-002359 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
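
The go-template above prints only the label keys of the first node; a simpler equivalent for interactive use:
kubectl --context functional-002359 get nodes --show-labels   # prints key=value pairs for every node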

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 ssh "sudo systemctl is-active docker": exit status 1 (380.348583ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 ssh "sudo systemctl is-active containerd": exit status 1 (365.020256ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
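
Both non-selected runtimes report "inactive" above; a sketch of the complementary check for the runtime this profile does use, assuming the unit is named crio as in standard CRI-O installs:
out/minikube-linux-arm64 -p functional-002359 ssh "sudo systemctl is-active crio"   # expected to print "active" on a crio profile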

TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-002359 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-002359 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-002359 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-002359 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 26050: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-002359 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-002359 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [b62a74fc-88db-41cb-9ccb-a0321f49cb84] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [b62a74fc-88db-41cb-9ccb-a0321f49cb84] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004205208s
I1109 13:39:31.480528    4116 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.37s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-002359 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
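
Once the tunnel has populated the LoadBalancer ingress IP, the service can be read and exercised directly; a sketch (only the jsonpath query appears in the log, the curl call is an addition):
IP=$(kubectl --context functional-002359 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -sSf "http://$IP"   # the AccessDirect step below reports this endpoint as working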

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.22.157 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-002359 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "375.974312ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "54.288024ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "371.412062ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.14874ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
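
A sketch of extracting fields from the JSON profile list with jq; jq and the "valid"/"Name" field names are assumptions, since the log only records the commands and their timings:
out/minikube-linux-arm64 profile list -o json | jq -r '.valid[].Name'   # field names assumed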

TestFunctional/parallel/MountCmd/any-port (6.91s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdany-port2302875689/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762696176637272891" to /tmp/TestFunctionalparallelMountCmdany-port2302875689/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762696176637272891" to /tmp/TestFunctionalparallelMountCmdany-port2302875689/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762696176637272891" to /tmp/TestFunctionalparallelMountCmdany-port2302875689/001/test-1762696176637272891
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (398.731991ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1109 13:49:37.036272    4116 retry.go:31] will retry after 418.365643ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  9 13:49 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  9 13:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  9 13:49 test-1762696176637272891
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh cat /mount-9p/test-1762696176637272891
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-002359 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [f71fbf0f-822b-4a8e-aea2-8c9950663c3d] Pending
helpers_test.go:352: "busybox-mount" [f71fbf0f-822b-4a8e-aea2-8c9950663c3d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [f71fbf0f-822b-4a8e-aea2-8c9950663c3d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [f71fbf0f-822b-4a8e-aea2-8c9950663c3d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003392972s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-002359 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdany-port2302875689/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.91s)
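
A minimal sketch of the same 9p mount flow outside the harness, with a hypothetical host directory:
out/minikube-linux-arm64 mount -p functional-002359 /tmp/hostdir:/mount-9p &                             # hypothetical host path; the mount command keeps running, hence &
out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T /mount-9p | grep 9p && ls -la /mount-9p"   # verify the mount from inside the node
out/minikube-linux-arm64 -p functional-002359 ssh "sudo umount -f /mount-9p"                             # clean up, then stop the backgrounded mount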

TestFunctional/parallel/MountCmd/specific-port (1.81s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdspecific-port2268795932/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.887462ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1109 13:49:43.901645    4116 retry.go:31] will retry after 309.166556ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdspecific-port2268795932/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 ssh "sudo umount -f /mount-9p": exit status 1 (376.341006ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-002359 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdspecific-port2268795932/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.26s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2208286672/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2208286672/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2208286672/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T" /mount1: exit status 1 (596.948631ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1109 13:49:45.959415    4116 retry.go:31] will retry after 522.694232ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-002359 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2208286672/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2208286672/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-002359 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2208286672/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.26s)

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 service list -o json
functional_test.go:1504: Took "615.351946ms" to run "out/minikube-linux-arm64 -p functional-002359 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.62s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.76s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-002359 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-002359 image ls --format short --alsologtostderr:
I1109 13:50:04.248534   32291 out.go:360] Setting OutFile to fd 1 ...
I1109 13:50:04.248740   32291 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:04.248769   32291 out.go:374] Setting ErrFile to fd 2...
I1109 13:50:04.248789   32291 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:04.249053   32291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
I1109 13:50:04.249651   32291 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:04.249816   32291 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:04.250300   32291 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
I1109 13:50:04.268864   32291 ssh_runner.go:195] Run: systemctl --version
I1109 13:50:04.268914   32291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
I1109 13:50:04.291813   32291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
I1109 13:50:04.416025   32291 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
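
As the stderr above shows, image ls is backed by crictl on the node; a sketch of querying that source directly (the jq filter and the CRI field names are assumptions):
out/minikube-linux-arm64 -p functional-002359 ssh "sudo crictl images --output json" | jq -r '.images[].repoTags[]'   # same data the short/table/json/yaml formats are rendered from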

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-002359 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/library/nginx                 │ latest             │ 2d5a8f08b76da │ 176MB  │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-002359 image ls --format table --alsologtostderr:
I1109 13:50:05.624811   32696 out.go:360] Setting OutFile to fd 1 ...
I1109 13:50:05.625023   32696 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:05.625052   32696 out.go:374] Setting ErrFile to fd 2...
I1109 13:50:05.625070   32696 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:05.625353   32696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
I1109 13:50:05.626050   32696 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:05.626208   32696 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:05.626712   32696 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
I1109 13:50:05.644765   32696 ssh_runner.go:195] Run: systemctl --version
I1109 13:50:05.644833   32696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
I1109 13:50:05.669072   32696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
I1109 13:50:05.782524   32696 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-002359 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"]
,"size":"54837949"},{"id":"2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006678"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c
9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest
/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb9925006
1dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"05baa95f5142d87797a2bc
1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-002359 image ls --format json --alsologtostderr:
I1109 13:50:05.393997   32646 out.go:360] Setting OutFile to fd 1 ...
I1109 13:50:05.394119   32646 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:05.394124   32646 out.go:374] Setting ErrFile to fd 2...
I1109 13:50:05.394128   32646 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:05.394466   32646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
I1109 13:50:05.395571   32646 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:05.395712   32646 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:05.396793   32646 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
I1109 13:50:05.416326   32646 ssh_runner.go:195] Run: systemctl --version
I1109 13:50:05.416376   32646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
I1109 13:50:05.435268   32646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
I1109 13:50:05.542505   32646 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-002359 image ls --format yaml --alsologtostderr:
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 2d5a8f08b76da55a3731f09e696a0ee5c6d8115ba5e80c5ae2ae1c210b3b1b98
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:63a931a2f5772f57ed7537f19330ee231c0550d1fbb95ee24d0e0e3e849bae33
repoTags:
- docker.io/library/nginx:latest
size: "176006678"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-002359 image ls --format yaml --alsologtostderr:
I1109 13:50:05.115284   32565 out.go:360] Setting OutFile to fd 1 ...
I1109 13:50:05.115402   32565 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:05.115418   32565 out.go:374] Setting ErrFile to fd 2...
I1109 13:50:05.115423   32565 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:05.115798   32565 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
I1109 13:50:05.116799   32565 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:05.116914   32565 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:05.117390   32565 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
I1109 13:50:05.140291   32565 ssh_runner.go:195] Run: systemctl --version
I1109 13:50:05.140348   32565 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
I1109 13:50:05.168000   32565 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
I1109 13:50:05.279727   32565 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
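
The stderr above shows what "image ls" actually does: load the profile config, open an SSH session into the node, and read the image list with crictl. A minimal sketch of reproducing both views by hand, assuming the functional-002359 profile from this run is still up:

  # High-level listing (what the test invokes) and the low-level crictl
  # view the stderr shows it is built on.
  out/minikube-linux-arm64 -p functional-002359 image ls --format yaml
  out/minikube-linux-arm64 -p functional-002359 ssh "sudo crictl images --output json"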

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-002359 ssh pgrep buildkitd: exit status 1 (367.943646ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image build -t localhost/my-image:functional-002359 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-002359 image build -t localhost/my-image:functional-002359 testdata/build --alsologtostderr: (3.509423885s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-002359 image build -t localhost/my-image:functional-002359 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> dbec600989e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-002359
--> 56acc53e258
Successfully tagged localhost/my-image:functional-002359
56acc53e2581f90a42a8e7606136fe694dbd8039f37ca76f99df2c840e02661c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-002359 image build -t localhost/my-image:functional-002359 testdata/build --alsologtostderr:
I1109 13:50:05.106330   32570 out.go:360] Setting OutFile to fd 1 ...
I1109 13:50:05.106635   32570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:05.106667   32570 out.go:374] Setting ErrFile to fd 2...
I1109 13:50:05.106687   32570 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1109 13:50:05.107018   32570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
I1109 13:50:05.107818   32570 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:05.108572   32570 config.go:182] Loaded profile config "functional-002359": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1109 13:50:05.109178   32570 cli_runner.go:164] Run: docker container inspect functional-002359 --format={{.State.Status}}
I1109 13:50:05.136739   32570 ssh_runner.go:195] Run: systemctl --version
I1109 13:50:05.136795   32570 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-002359
I1109 13:50:05.157166   32570 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/functional-002359/id_rsa Username:docker}
I1109 13:50:05.266405   32570 build_images.go:162] Building image from path: /tmp/build.481071817.tar
I1109 13:50:05.266467   32570 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1109 13:50:05.276326   32570 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.481071817.tar
I1109 13:50:05.281738   32570 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.481071817.tar: stat -c "%s %y" /var/lib/minikube/build/build.481071817.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.481071817.tar': No such file or directory
I1109 13:50:05.281778   32570 ssh_runner.go:362] scp /tmp/build.481071817.tar --> /var/lib/minikube/build/build.481071817.tar (3072 bytes)
I1109 13:50:05.305961   32570 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.481071817
I1109 13:50:05.320276   32570 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.481071817 -xf /var/lib/minikube/build/build.481071817.tar
I1109 13:50:05.336753   32570 crio.go:315] Building image: /var/lib/minikube/build/build.481071817
I1109 13:50:05.336842   32570 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-002359 /var/lib/minikube/build/build.481071817 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1109 13:50:08.535525   32570 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-002359 /var/lib/minikube/build/build.481071817 --cgroup-manager=cgroupfs: (3.198661455s)
I1109 13:50:08.535592   32570 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.481071817
I1109 13:50:08.543828   32570 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.481071817.tar
I1109 13:50:08.551347   32570 build_images.go:218] Built localhost/my-image:functional-002359 from /tmp/build.481071817.tar
I1109 13:50:08.551376   32570 build_images.go:134] succeeded building to: functional-002359
I1109 13:50:08.551381   32570 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.11s)
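
The STEP lines above imply the testdata/build context is roughly a three-step build file (FROM the minikube busybox image, RUN true, ADD content.txt). A hedged reconstruction of an equivalent context, inferred only from those logged steps (the real testdata/build may differ), followed by the same image build call the test makes:

  # Recreate an equivalent build context; the file contents here are placeholders.
  mkdir -p /tmp/build-sketch
  printf 'hello\n' > /tmp/build-sketch/content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
  # Build inside the node, as the test does; the stderr above shows crio profiles build via podman.
  out/minikube-linux-arm64 -p functional-002359 image build -t localhost/my-image:functional-002359 /tmp/build-sketch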

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-002359
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image rm kicbase/echo-server:functional-002359 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.64s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-002359 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-002359
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-002359
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-002359
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (149.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1109 13:52:06.460038    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m28.602441701s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (149.53s)
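
The --ha flag requested a multi-control-plane cluster; the later subtests show the result is three control-plane nodes (ha-423884, -m02, -m03) plus a worker added afterwards. A minimal sketch of the same start-and-verify sequence, with the flags this run used:

  # Start an HA cluster and confirm every node reports Running/Configured.
  out/minikube-linux-arm64 -p ha-423884 start --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5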

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 kubectl -- rollout status deployment/busybox: (5.524740658s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-5bfxx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-bprtw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-c9qf4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-5bfxx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-bprtw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-c9qf4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-5bfxx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-bprtw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-c9qf4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.19s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-5bfxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-5bfxx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-bprtw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-bprtw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-c9qf4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-c9qf4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.46s)
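
The sh -c pipeline above extracts the address that host.minikube.internal resolves to inside a pod (awk 'NR==5' plus cut pick out the field where busybox's nslookup prints the resolved IP), and the ping that follows confirms the pod can reach that host address (192.168.49.1 in this run). A standalone version of the same two checks, using one pod name from this log:

  # Resolve the host gateway name from inside the pod, then ping it once.
  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-5bfxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-arm64 -p ha-423884 kubectl -- exec busybox-7b57f96db7-5bfxx -- sh -c "ping -c 1 192.168.49.1"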

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (32.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 node add --alsologtostderr -v 5: (31.701660968s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5: (1.113787006s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.82s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-423884 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.074772572s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 status --output json --alsologtostderr -v 5: (1.028011306s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp testdata/cp-test.txt ha-423884:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884_ha-423884-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test_ha-423884_ha-423884-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884_ha-423884-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m03 "sudo cat /home/docker/cp-test_ha-423884_ha-423884-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884:/home/docker/cp-test.txt ha-423884-m04:/home/docker/cp-test_ha-423884_ha-423884-m04.txt
E1109 13:53:29.521742    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m04 "sudo cat /home/docker/cp-test_ha-423884_ha-423884-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp testdata/cp-test.txt ha-423884-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m02:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m02_ha-423884.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884 "sudo cat /home/docker/cp-test_ha-423884-m02_ha-423884.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m02:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m02_ha-423884-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m03 "sudo cat /home/docker/cp-test_ha-423884-m02_ha-423884-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m02:/home/docker/cp-test.txt ha-423884-m04:/home/docker/cp-test_ha-423884-m02_ha-423884-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m04 "sudo cat /home/docker/cp-test_ha-423884-m02_ha-423884-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp testdata/cp-test.txt ha-423884-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m03_ha-423884.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884 "sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m03_ha-423884-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m03:/home/docker/cp-test.txt ha-423884-m04:/home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m04 "sudo cat /home/docker/cp-test_ha-423884-m03_ha-423884-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp testdata/cp-test.txt ha-423884-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile776916293/001/cp-test_ha-423884-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884:/home/docker/cp-test_ha-423884-m04_ha-423884.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884 "sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 cp ha-423884-m04:/home/docker/cp-test.txt ha-423884-m03:/home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m03 "sudo cat /home/docker/cp-test_ha-423884-m04_ha-423884-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.94s)
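
Every block above is the same three-step pattern repeated for each node pair: cp a file onto a node, cp it from that node to another (or back to the host), then cat it over ssh -n to verify the contents arrived. The pattern for a single pair, taken verbatim from this run:

  # host -> node, node -> node, then verify on the destination node.
  out/minikube-linux-arm64 -p ha-423884 cp testdata/cp-test.txt ha-423884:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-423884 cp ha-423884:/home/docker/cp-test.txt ha-423884-m02:/home/docker/cp-test_ha-423884_ha-423884-m02.txt
  out/minikube-linux-arm64 -p ha-423884 ssh -n ha-423884-m02 "sudo cat /home/docker/cp-test_ha-423884_ha-423884-m02.txt"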

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 node stop m02 --alsologtostderr -v 5: (12.167029831s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5: exit status 7 (778.156327ms)

                                                
                                                
-- stdout --
	ha-423884
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423884-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423884-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423884-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 13:53:57.052493   47548 out.go:360] Setting OutFile to fd 1 ...
	I1109 13:53:57.052692   47548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:53:57.052704   47548 out.go:374] Setting ErrFile to fd 2...
	I1109 13:53:57.052709   47548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 13:53:57.052978   47548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 13:53:57.053184   47548 out.go:368] Setting JSON to false
	I1109 13:53:57.053227   47548 mustload.go:66] Loading cluster: ha-423884
	I1109 13:53:57.053306   47548 notify.go:221] Checking for updates...
	I1109 13:53:57.053623   47548 config.go:182] Loaded profile config "ha-423884": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 13:53:57.053635   47548 status.go:174] checking status of ha-423884 ...
	I1109 13:53:57.054519   47548 cli_runner.go:164] Run: docker container inspect ha-423884 --format={{.State.Status}}
	I1109 13:53:57.078652   47548 status.go:371] ha-423884 host status = "Running" (err=<nil>)
	I1109 13:53:57.078676   47548 host.go:66] Checking if "ha-423884" exists ...
	I1109 13:53:57.078979   47548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884
	I1109 13:53:57.108591   47548 host.go:66] Checking if "ha-423884" exists ...
	I1109 13:53:57.109035   47548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:53:57.109094   47548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884
	I1109 13:53:57.135298   47548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884/id_rsa Username:docker}
	I1109 13:53:57.241785   47548 ssh_runner.go:195] Run: systemctl --version
	I1109 13:53:57.250211   47548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:53:57.263839   47548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 13:53:57.322474   47548 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-09 13:53:57.312683103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 13:53:57.323071   47548 kubeconfig.go:125] found "ha-423884" server: "https://192.168.49.254:8443"
	I1109 13:53:57.323114   47548 api_server.go:166] Checking apiserver status ...
	I1109 13:53:57.323156   47548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:53:57.335818   47548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	I1109 13:53:57.344536   47548 api_server.go:182] apiserver freezer: "11:freezer:/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio/crio-1bfe9c9a16f07b4ee71eb9086e98f91bb5a4bec75c7fa58010e30df2449f7edd"
	I1109 13:53:57.344619   47548 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8c902201acb6ac6cc85dcb02210aea656c86b05a6fb72e47cb8c42c952f307e8/crio/crio-1bfe9c9a16f07b4ee71eb9086e98f91bb5a4bec75c7fa58010e30df2449f7edd/freezer.state
	I1109 13:53:57.352637   47548 api_server.go:204] freezer state: "THAWED"
	I1109 13:53:57.352662   47548 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 13:53:57.360929   47548 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1109 13:53:57.360956   47548 status.go:463] ha-423884 apiserver status = Running (err=<nil>)
	I1109 13:53:57.360974   47548 status.go:176] ha-423884 status: &{Name:ha-423884 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:53:57.360991   47548 status.go:174] checking status of ha-423884-m02 ...
	I1109 13:53:57.361300   47548 cli_runner.go:164] Run: docker container inspect ha-423884-m02 --format={{.State.Status}}
	I1109 13:53:57.378660   47548 status.go:371] ha-423884-m02 host status = "Stopped" (err=<nil>)
	I1109 13:53:57.378685   47548 status.go:384] host is not running, skipping remaining checks
	I1109 13:53:57.378693   47548 status.go:176] ha-423884-m02 status: &{Name:ha-423884-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:53:57.378779   47548 status.go:174] checking status of ha-423884-m03 ...
	I1109 13:53:57.379108   47548 cli_runner.go:164] Run: docker container inspect ha-423884-m03 --format={{.State.Status}}
	I1109 13:53:57.396859   47548 status.go:371] ha-423884-m03 host status = "Running" (err=<nil>)
	I1109 13:53:57.396883   47548 host.go:66] Checking if "ha-423884-m03" exists ...
	I1109 13:53:57.397184   47548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m03
	I1109 13:53:57.415363   47548 host.go:66] Checking if "ha-423884-m03" exists ...
	I1109 13:53:57.415822   47548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:53:57.415933   47548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m03
	I1109 13:53:57.441020   47548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m03/id_rsa Username:docker}
	I1109 13:53:57.545768   47548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:53:57.559131   47548 kubeconfig.go:125] found "ha-423884" server: "https://192.168.49.254:8443"
	I1109 13:53:57.559161   47548 api_server.go:166] Checking apiserver status ...
	I1109 13:53:57.559207   47548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 13:53:57.571175   47548 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	I1109 13:53:57.579611   47548 api_server.go:182] apiserver freezer: "11:freezer:/docker/bb91fc8b1606d8f2866d22015b109bf99ec21f19ef283ad84bbd2df1b28777c7/crio/crio-847ad13de09e89e8fcd04273f044eb80ac6584e3c4938ad16d223d6459a8dd66"
	I1109 13:53:57.579687   47548 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bb91fc8b1606d8f2866d22015b109bf99ec21f19ef283ad84bbd2df1b28777c7/crio/crio-847ad13de09e89e8fcd04273f044eb80ac6584e3c4938ad16d223d6459a8dd66/freezer.state
	I1109 13:53:57.587922   47548 api_server.go:204] freezer state: "THAWED"
	I1109 13:53:57.587949   47548 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1109 13:53:57.596235   47548 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1109 13:53:57.596263   47548 status.go:463] ha-423884-m03 apiserver status = Running (err=<nil>)
	I1109 13:53:57.596274   47548 status.go:176] ha-423884-m03 status: &{Name:ha-423884-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 13:53:57.596298   47548 status.go:174] checking status of ha-423884-m04 ...
	I1109 13:53:57.596603   47548 cli_runner.go:164] Run: docker container inspect ha-423884-m04 --format={{.State.Status}}
	I1109 13:53:57.615746   47548 status.go:371] ha-423884-m04 host status = "Running" (err=<nil>)
	I1109 13:53:57.615774   47548 host.go:66] Checking if "ha-423884-m04" exists ...
	I1109 13:53:57.616096   47548 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423884-m04
	I1109 13:53:57.634420   47548 host.go:66] Checking if "ha-423884-m04" exists ...
	I1109 13:53:57.634731   47548 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 13:53:57.634781   47548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423884-m04
	I1109 13:53:57.653070   47548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/ha-423884-m04/id_rsa Username:docker}
	I1109 13:53:57.757622   47548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 13:53:57.775018   47548 status.go:176] ha-423884-m04 status: &{Name:ha-423884-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.95s)
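
Note that status exits non-zero (7 in this run) once a node is stopped, which is why the harness records a "Non-zero exit" here while the subtest still passes. A sketch of reproducing the degraded view without tripping an errexit shell:

  # Stop the second control plane, then read status; expect a non-zero exit
  # code (7 in this run) while ha-423884-m02 reports Stopped.
  out/minikube-linux-arm64 -p ha-423884 node stop m02 --alsologtostderr -v 5
  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5 || echo "degraded, status exit code: $?"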

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (27.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 node start m02 --alsologtostderr -v 5
E1109 13:54:20.733826    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:20.740290    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:20.751740    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:20.773270    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:20.814709    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:20.896173    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:21.057848    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:21.379546    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:22.021713    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 13:54:23.303699    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 node start m02 --alsologtostderr -v 5: (26.571202073s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5
E1109 13:54:25.865139    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-423884 status --alsologtostderr -v 5: (1.157732596s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (27.83s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.009882011s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                    
TestJSONOutput/start/Command (80.68s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-510235 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-510235 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m20.68065488s)
--- PASS: TestJSONOutput/start/Command (80.68s)
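
With --output=json, start prints machine-readable JSON events instead of the usual progress text; the sibling subtests below then assert that the step counters in those events are distinct and increasing. A hedged way to eyeball the same stream (jq is only an illustration here, not something the test uses):

  # Stream the per-step JSON events from a start run through jq.
  out/minikube-linux-arm64 start -p json-output-510235 --output=json --user=testUser --memory=3072 --wait=true --driver=docker --container-runtime=crio | jq -c .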

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-510235 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-510235 --output=json --user=testUser: (5.876870341s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-248591 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-248591 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (93.287418ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a7e2b46c-0c34-411f-a1ad-5a320ff9e5db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-248591] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4860ebca-2e8d-42dc-9793-471fb43e40a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"ba2c5b2e-ddf2-4d72-9d93-3529d673fc31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c7c3ab7a-868e-48c5-81c8-17154166bec6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig"}}
	{"specversion":"1.0","id":"e25edd4e-e4d1-42c6-895a-0f2b5e953264","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube"}}
	{"specversion":"1.0","id":"54ef8bad-b2db-44a1-9a7e-cdc823fe9846","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"3580d84d-39ca-4494-b757-0387084b1c27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d8889596-a4a0-4036-b4d9-f81e73760b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-248591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-248591
--- PASS: TestErrorJSONOutput (0.24s)
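Note: the start output above is newline-delimited CloudEvents JSON, so the error event can be pulled out mechanically. A minimal sketch, assuming jq is available and using an illustrative profile name; the event type string is copied verbatim from the output above:

    # rerun the intentionally failing start and keep only the error events
    out/minikube-linux-arm64 start -p json-output-error-demo --memory=3072 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'
    out/minikube-linux-arm64 delete -p json-output-error-demo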

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.29s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-086790 --network=
E1109 14:09:20.735998    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-086790 --network=: (38.02699212s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-086790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-086790
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-086790: (2.236405221s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.29s)
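Note: a rough manual equivalent of this test, assuming a local minikube binary and the Docker driver; the profile name is illustrative. With an empty --network value the test expects a Docker network matching the profile name to appear:

    minikube start -p custom-net-demo --network= --driver=docker --container-runtime=crio
    # the network created for the profile should show up by name
    docker network ls --format '{{.Name}}' | grep custom-net-demo
    minikube delete -p custom-net-demo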

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (39.12s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-715280 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-715280 --network=bridge: (36.934989952s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-715280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-715280
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-715280: (2.15978011s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.12s)

                                                
                                    
x
+
TestKicExistingNetwork (38.6s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1109 14:10:07.112659    4116 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1109 14:10:07.131613    4116 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1109 14:10:07.132491    4116 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1109 14:10:07.132535    4116 cli_runner.go:164] Run: docker network inspect existing-network
W1109 14:10:07.148602    4116 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1109 14:10:07.148657    4116 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1109 14:10:07.148674    4116 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1109 14:10:07.148797    4116 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1109 14:10:07.166474    4116 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b901b8dcb821 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:fe:01:f6:7f:4e:91} reservation:<nil>}
I1109 14:10:07.166786    4116 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400030a310}
I1109 14:10:07.166809    4116 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1109 14:10:07.166858    4116 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1109 14:10:07.230697    4116 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-721928 --network=existing-network
E1109 14:10:09.523077    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-721928 --network=existing-network: (36.121889121s)
helpers_test.go:175: Cleaning up "existing-network-721928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-721928
E1109 14:10:43.798470    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-721928: (2.332039896s)
I1109 14:10:45.700464    4116 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (38.60s)
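Note: the same flow can be reproduced by pre-creating the bridge network and pointing --network at it. A sketch using the subnet the run above happened to pick (any free private range works; the profile name is illustrative):

    # pre-create a user-defined bridge network
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    # attach the new cluster to it instead of letting minikube create one
    minikube start -p existing-net-demo --network=existing-network --driver=docker --container-runtime=crio
    minikube delete -p existing-net-demo
    docker network rm existing-network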

                                                
                                    
x
+
TestKicCustomSubnet (38.48s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-583472 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-583472 --subnet=192.168.60.0/24: (36.208524761s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-583472 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-583472" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-583472
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-583472: (2.241828578s)
--- PASS: TestKicCustomSubnet (38.48s)
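Note: a sketch of the --subnet flag together with the inspect command the test uses for verification (profile name illustrative):

    minikube start -p subnet-demo --subnet=192.168.60.0/24 --driver=docker --container-runtime=crio
    # should print the requested range
    docker network inspect subnet-demo --format '{{(index .IPAM.Config 0).Subnet}}'
    minikube delete -p subnet-demo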

                                                
                                    
x
+
TestKicStaticIP (36.75s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-093029 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-093029 --static-ip=192.168.200.200: (34.069857493s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-093029 ip
helpers_test.go:175: Cleaning up "static-ip-093029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-093029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-093029: (2.530871181s)
--- PASS: TestKicStaticIP (36.75s)
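Note: a sketch of the static-IP flow with an illustrative profile name; `minikube ip` is the same check the test performs:

    minikube start -p static-ip-demo --static-ip=192.168.200.200 --driver=docker --container-runtime=crio
    # should print 192.168.200.200
    minikube -p static-ip-demo ip
    minikube delete -p static-ip-demo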

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (74.64s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-855966 --driver=docker  --container-runtime=crio
E1109 14:12:06.460007    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-855966 --driver=docker  --container-runtime=crio: (31.905781078s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-858994 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-858994 --driver=docker  --container-runtime=crio: (37.094474037s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-855966
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-858994
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-858994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-858994
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-858994: (2.074821097s)
helpers_test.go:175: Cleaning up "first-855966" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-855966
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-855966: (2.091971446s)
--- PASS: TestMinikubeProfile (74.64s)
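Note: a sketch of the two-profile workflow this test exercises, with illustrative profile names:

    minikube start -p first --driver=docker --container-runtime=crio
    minikube start -p second --driver=docker --container-runtime=crio
    # switch the active profile and list all profiles as JSON
    minikube profile first
    minikube profile list -ojson
    minikube delete -p second
    minikube delete -p first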

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.89s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-168559 --memory=3072 --mount-string /tmp/TestMountStartserial2833935617/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-168559 --memory=3072 --mount-string /tmp/TestMountStartserial2833935617/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.891491537s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.89s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-168559 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
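Note: the two blocks above reduce to passing the mount flags at start time and checking the target path over ssh. A sketch mirroring the flags the test passes (host path, port, and profile name are illustrative):

    mkdir -p /tmp/mount-demo
    minikube start -p mount-demo --memory=3072 --no-kubernetes \
      --mount-string /tmp/mount-demo:/minikube-host --mount-gid 0 --mount-uid 0 \
      --mount-msize 6543 --mount-port 46464 --driver=docker --container-runtime=crio
    # the host directory should be visible inside the node
    minikube -p mount-demo ssh -- ls /minikube-host
    minikube delete -p mount-demo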

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.03s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-170528 --memory=3072 --mount-string /tmp/TestMountStartserial2833935617/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-170528 --memory=3072 --mount-string /tmp/TestMountStartserial2833935617/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.032227508s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.03s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-170528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-168559 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-168559 --alsologtostderr -v=5: (1.727373743s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-170528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-170528
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-170528: (1.293648978s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-170528
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-170528: (6.985186709s)
--- PASS: TestMountStart/serial/RestartStopped (7.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-170528 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (137.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-644769 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1109 14:14:20.734436    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-644769 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.581961335s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.13s)
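Note: a sketch of bringing up the same two-node layout by hand (profile name illustrative); `status` and `kubectl get nodes` are the checks the test relies on:

    # one control-plane node plus one worker
    minikube start -p multinode-demo --nodes=2 --memory=3072 --wait=true --driver=docker --container-runtime=crio
    minikube -p multinode-demo status
    kubectl --context multinode-demo get nodes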

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-644769 -- rollout status deployment/busybox: (3.19576942s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-29vqw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-8fvwz -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-29vqw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-8fvwz -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-29vqw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-8fvwz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.90s)
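Note: the test applies its own manifest (testdata/multinodes/multinode-pod-dns-test.yaml, not reproduced here) and then resolves names from every pod. A rough hand-rolled equivalent of the DNS check only, assuming the two-node profile sketched above and an illustrative deployment name:

    kubectl --context multinode-demo create deployment busybox --image=busybox --replicas=2 -- sleep 3600
    kubectl --context multinode-demo rollout status deployment/busybox
    # resolve the in-cluster API service name from each pod
    for pod in $(kubectl --context multinode-demo get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context multinode-demo exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done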

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-29vqw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-29vqw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-8fvwz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-644769 -- exec busybox-7b57f96db7-8fvwz -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (59.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-644769 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-644769 -v=5 --alsologtostderr: (58.540922064s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status --alsologtostderr
E1109 14:17:06.456358    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiNode/serial/AddNode (59.31s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-644769 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp testdata/cp-test.txt multinode-644769:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile637553163/001/cp-test_multinode-644769.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769:/home/docker/cp-test.txt multinode-644769-m02:/home/docker/cp-test_multinode-644769_multinode-644769-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m02 "sudo cat /home/docker/cp-test_multinode-644769_multinode-644769-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769:/home/docker/cp-test.txt multinode-644769-m03:/home/docker/cp-test_multinode-644769_multinode-644769-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m03 "sudo cat /home/docker/cp-test_multinode-644769_multinode-644769-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp testdata/cp-test.txt multinode-644769-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile637553163/001/cp-test_multinode-644769-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769-m02:/home/docker/cp-test.txt multinode-644769:/home/docker/cp-test_multinode-644769-m02_multinode-644769.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769 "sudo cat /home/docker/cp-test_multinode-644769-m02_multinode-644769.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769-m02:/home/docker/cp-test.txt multinode-644769-m03:/home/docker/cp-test_multinode-644769-m02_multinode-644769-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m03 "sudo cat /home/docker/cp-test_multinode-644769-m02_multinode-644769-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp testdata/cp-test.txt multinode-644769-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile637553163/001/cp-test_multinode-644769-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769-m03:/home/docker/cp-test.txt multinode-644769:/home/docker/cp-test_multinode-644769-m03_multinode-644769.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769 "sudo cat /home/docker/cp-test_multinode-644769-m03_multinode-644769.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 cp multinode-644769-m03:/home/docker/cp-test.txt multinode-644769-m02:/home/docker/cp-test_multinode-644769-m03_multinode-644769-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 ssh -n multinode-644769-m02 "sudo cat /home/docker/cp-test_multinode-644769-m03_multinode-644769-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.54s)
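Note: the copy matrix above is driven entirely by `minikube cp` plus `minikube ssh -n` for verification. A condensed sketch of one host-to-node and one node-to-node hop (profile and paths illustrative):

    # host -> control-plane node
    minikube -p multinode-demo cp ./cp-test.txt multinode-demo:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
    # control-plane node -> worker node
    minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
    minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"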

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-644769 node stop m03: (1.310186886s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-644769 status: exit status 7 (549.016892ms)

                                                
                                                
-- stdout --
	multinode-644769
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-644769-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-644769-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-644769 status --alsologtostderr: exit status 7 (544.695501ms)

                                                
                                                
-- stdout --
	multinode-644769
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-644769-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-644769-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:17:20.109938  110369 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:17:20.110110  110369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:17:20.110141  110369 out.go:374] Setting ErrFile to fd 2...
	I1109 14:17:20.110160  110369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:17:20.110432  110369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:17:20.110672  110369 out.go:368] Setting JSON to false
	I1109 14:17:20.110744  110369 mustload.go:66] Loading cluster: multinode-644769
	I1109 14:17:20.110818  110369 notify.go:221] Checking for updates...
	I1109 14:17:20.111860  110369 config.go:182] Loaded profile config "multinode-644769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:17:20.111953  110369 status.go:174] checking status of multinode-644769 ...
	I1109 14:17:20.112568  110369 cli_runner.go:164] Run: docker container inspect multinode-644769 --format={{.State.Status}}
	I1109 14:17:20.134740  110369 status.go:371] multinode-644769 host status = "Running" (err=<nil>)
	I1109 14:17:20.134761  110369 host.go:66] Checking if "multinode-644769" exists ...
	I1109 14:17:20.135064  110369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-644769
	I1109 14:17:20.169402  110369 host.go:66] Checking if "multinode-644769" exists ...
	I1109 14:17:20.169708  110369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:17:20.169753  110369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-644769
	I1109 14:17:20.191443  110369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/multinode-644769/id_rsa Username:docker}
	I1109 14:17:20.297550  110369 ssh_runner.go:195] Run: systemctl --version
	I1109 14:17:20.304076  110369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:17:20.318423  110369 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:17:20.373284  110369 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-09 14:17:20.364094021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:17:20.373829  110369 kubeconfig.go:125] found "multinode-644769" server: "https://192.168.67.2:8443"
	I1109 14:17:20.373855  110369 api_server.go:166] Checking apiserver status ...
	I1109 14:17:20.373900  110369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 14:17:20.385423  110369 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1231/cgroup
	I1109 14:17:20.393452  110369 api_server.go:182] apiserver freezer: "11:freezer:/docker/666b24932e5b99a900cc1ba8738e4cb41fffe7ced8553fefbaa2724c3a102315/crio/crio-85409b3f28fa859607c208015e8bf87aceeea20bea8820eecd10e8742ae4e0e1"
	I1109 14:17:20.393516  110369 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/666b24932e5b99a900cc1ba8738e4cb41fffe7ced8553fefbaa2724c3a102315/crio/crio-85409b3f28fa859607c208015e8bf87aceeea20bea8820eecd10e8742ae4e0e1/freezer.state
	I1109 14:17:20.401048  110369 api_server.go:204] freezer state: "THAWED"
	I1109 14:17:20.401076  110369 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1109 14:17:20.409647  110369 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1109 14:17:20.409674  110369 status.go:463] multinode-644769 apiserver status = Running (err=<nil>)
	I1109 14:17:20.409685  110369 status.go:176] multinode-644769 status: &{Name:multinode-644769 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:17:20.409702  110369 status.go:174] checking status of multinode-644769-m02 ...
	I1109 14:17:20.410017  110369 cli_runner.go:164] Run: docker container inspect multinode-644769-m02 --format={{.State.Status}}
	I1109 14:17:20.426895  110369 status.go:371] multinode-644769-m02 host status = "Running" (err=<nil>)
	I1109 14:17:20.426919  110369 host.go:66] Checking if "multinode-644769-m02" exists ...
	I1109 14:17:20.427221  110369 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-644769-m02
	I1109 14:17:20.444257  110369 host.go:66] Checking if "multinode-644769-m02" exists ...
	I1109 14:17:20.444598  110369 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 14:17:20.444645  110369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-644769-m02
	I1109 14:17:20.462048  110369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21139-2320/.minikube/machines/multinode-644769-m02/id_rsa Username:docker}
	I1109 14:17:20.565111  110369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 14:17:20.577636  110369 status.go:176] multinode-644769-m02 status: &{Name:multinode-644769-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:17:20.577671  110369 status.go:174] checking status of multinode-644769-m03 ...
	I1109 14:17:20.578009  110369 cli_runner.go:164] Run: docker container inspect multinode-644769-m03 --format={{.State.Status}}
	I1109 14:17:20.595489  110369 status.go:371] multinode-644769-m03 host status = "Stopped" (err=<nil>)
	I1109 14:17:20.595511  110369 status.go:384] host is not running, skipping remaining checks
	I1109 14:17:20.595518  110369 status.go:176] multinode-644769-m03 status: &{Name:multinode-644769-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.40s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-644769 node start m03 -v=5 --alsologtostderr: (7.440644473s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.21s)
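Note: the stop/start pair above maps onto the node subcommands; while any node is down, `minikube status` exits non-zero (exit status 7 in the run above), which is what the tests assert. A sketch against the illustrative profile from earlier:

    # add a third node if one is not already present (as the AddNode step above does)
    minikube -p multinode-demo node add
    minikube -p multinode-demo node stop m03
    minikube -p multinode-demo status        # non-zero exit while m03 is stopped
    minikube -p multinode-demo node start m03
    kubectl --context multinode-demo get nodes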

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (78.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-644769
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-644769
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-644769: (25.058422488s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-644769 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-644769 --wait=true -v=5 --alsologtostderr: (53.43675352s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-644769
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.63s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-644769 node delete m03: (4.998779398s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.71s)
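Note: removing the extra worker again is a single node subcommand; a sketch with the same follow-up checks the test runs (profile name illustrative):

    minikube -p multinode-demo node delete m03
    minikube -p multinode-demo status
    kubectl --context multinode-demo get nodes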

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-644769 stop: (23.7944952s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-644769 status: exit status 7 (106.788623ms)

                                                
                                                
-- stdout --
	multinode-644769
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-644769-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-644769 status --alsologtostderr: exit status 7 (97.841611ms)

                                                
                                                
-- stdout --
	multinode-644769
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-644769-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:19:17.108542  118137 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:19:17.108675  118137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:19:17.108686  118137 out.go:374] Setting ErrFile to fd 2...
	I1109 14:19:17.108692  118137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:19:17.109073  118137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:19:17.109650  118137 out.go:368] Setting JSON to false
	I1109 14:19:17.109796  118137 mustload.go:66] Loading cluster: multinode-644769
	I1109 14:19:17.110216  118137 config.go:182] Loaded profile config "multinode-644769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:19:17.110234  118137 status.go:174] checking status of multinode-644769 ...
	I1109 14:19:17.110756  118137 cli_runner.go:164] Run: docker container inspect multinode-644769 --format={{.State.Status}}
	I1109 14:19:17.110979  118137 notify.go:221] Checking for updates...
	I1109 14:19:17.130923  118137 status.go:371] multinode-644769 host status = "Stopped" (err=<nil>)
	I1109 14:19:17.130950  118137 status.go:384] host is not running, skipping remaining checks
	I1109 14:19:17.130957  118137 status.go:176] multinode-644769 status: &{Name:multinode-644769 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 14:19:17.130983  118137 status.go:174] checking status of multinode-644769-m02 ...
	I1109 14:19:17.131286  118137 cli_runner.go:164] Run: docker container inspect multinode-644769-m02 --format={{.State.Status}}
	I1109 14:19:17.153504  118137 status.go:371] multinode-644769-m02 host status = "Stopped" (err=<nil>)
	I1109 14:19:17.153531  118137 status.go:384] host is not running, skipping remaining checks
	I1109 14:19:17.153537  118137 status.go:176] multinode-644769-m02 status: &{Name:multinode-644769-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.00s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (48.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-644769 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1109 14:19:20.733427    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-644769 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (47.657475574s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-644769 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.40s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (41.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-644769
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-644769-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-644769-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.95259ms)

                                                
                                                
-- stdout --
	* [multinode-644769-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-644769-m02' is duplicated with machine name 'multinode-644769-m02' in profile 'multinode-644769'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-644769-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-644769-m03 --driver=docker  --container-runtime=crio: (38.748733996s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-644769
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-644769: exit status 80 (337.915932ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-644769 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-644769-m03 already exists in multinode-644769-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-644769-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-644769-m03: (2.20153017s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.43s)
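Editor's note: the exit status 14 above (MK_USAGE, "Profile name should be unique") is raised when a new profile name collides with a machine name inside an existing profile. The sketch below is a hedged pre-flight guard, not the test's own logic: it shells out to "minikube profile list --output=json" (a command used elsewhere in this run) and does a deliberately schema-agnostic substring match, because the exact JSON layout of that command is not shown in this report.

// profilecheck.go - sketch of a pre-flight guard against the duplicate-name
// failure above. The JSON shape of "profile list" is treated as unknown here,
// so the check is a crude but safe substring match over the raw payload.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func profileNameTaken(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "--output=json").Output()
	if err != nil {
		return false, err
	}
	// Look for the quoted name anywhere in the JSON payload.
	return strings.Contains(string(out), `"`+name+`"`), nil
}

func main() {
	name := "multinode-644769-m02" // hypothetical candidate profile name
	taken, err := profileNameTaken(name)
	if err != nil {
		log.Fatalf("profile list: %v", err)
	}
	if taken {
		fmt.Printf("refusing to start: %q already appears in an existing profile\n", name)
		return
	}
	fmt.Printf("%q looks free; safe to run: minikube start -p %s\n", name, name)
}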

                                                
                                    
x
+
TestPreload (128.19s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-274251 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-274251 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m1.047790799s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-274251 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-274251 image pull gcr.io/k8s-minikube/busybox: (2.335550907s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-274251
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-274251: (6.205299498s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-274251 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1109 14:22:06.457024    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-274251 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.898208937s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-274251 image list
helpers_test.go:175: Cleaning up "test-preload-274251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-274251
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-274251: (2.454153877s)
--- PASS: TestPreload (128.19s)
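Editor's note: the sequence above (start with --preload=false, pull an image, stop, restart, list images) verifies that a manually pulled image survives a stop/start cycle. Below is a hedged sketch of the same flow driven from Go; the flags and image reference are taken from the log, while the profile name, error handling, and cleanup step are assumptions.

// preloadflow.go - sketch mirroring the TestPreload sequence above.
// Assumes the minikube binary is on PATH; the profile name is arbitrary.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %s: %v\n%s", strings.Join(args, " "), err, out)
	}
	return string(out)
}

func main() {
	const profile = "preload-demo" // hypothetical profile name
	run("start", "-p", profile, "--memory=3072", "--preload=false", "--driver=docker", "--container-runtime=crio")
	run("-p", profile, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", profile)
	run("start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=crio")
	images := run("-p", profile, "image", "list")
	if strings.Contains(images, "busybox") {
		fmt.Println("busybox survived the stop/start cycle")
	} else {
		fmt.Println("busybox missing after restart")
	}
	run("delete", "-p", profile)
}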

                                                
                                    
x
+
TestScheduledStopUnix (109.19s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-479557 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-479557 --memory=3072 --driver=docker  --container-runtime=crio: (33.258823937s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-479557 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-479557 -n scheduled-stop-479557
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-479557 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1109 14:23:33.276607    4116 retry.go:31] will retry after 51.085µs: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.276754    4116 retry.go:31] will retry after 186.863µs: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.277886    4116 retry.go:31] will retry after 287.325µs: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.279004    4116 retry.go:31] will retry after 361.377µs: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.280119    4116 retry.go:31] will retry after 359.289µs: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.281236    4116 retry.go:31] will retry after 936.39µs: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.282354    4116 retry.go:31] will retry after 1.146505ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.284577    4116 retry.go:31] will retry after 2.119518ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.287736    4116 retry.go:31] will retry after 2.589597ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.291005    4116 retry.go:31] will retry after 1.981448ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.293244    4116 retry.go:31] will retry after 7.25197ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.301472    4116 retry.go:31] will retry after 7.219672ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.309759    4116 retry.go:31] will retry after 14.743825ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.325018    4116 retry.go:31] will retry after 24.169497ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.350313    4116 retry.go:31] will retry after 26.540022ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
I1109 14:23:33.377940    4116 retry.go:31] will retry after 56.172438ms: open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/scheduled-stop-479557/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-479557 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-479557 -n scheduled-stop-479557
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-479557
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-479557 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1109 14:24:20.737046    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-479557
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-479557: exit status 7 (66.896108ms)

                                                
                                                
-- stdout --
	scheduled-stop-479557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-479557 -n scheduled-stop-479557
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-479557 -n scheduled-stop-479557: exit status 7 (71.933124ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-479557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-479557
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-479557: (4.298849891s)
--- PASS: TestScheduledStopUnix (109.19s)
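Editor's note: the scheduled-stop flow above arms a stop with --schedule, optionally cancels it with --cancel-scheduled, and then polls status until the host reports Stopped (exit status 7 from the status command is expected once the host is down). Below is a hedged sketch of that polling loop; the 15s schedule and the --format string come from the log, while the poll interval, timeout, and profile name are assumptions.

// scheduledstop.go - sketch of the wait loop used above: arm a scheduled stop,
// then poll `minikube status --format={{.Host}}` until it reports "Stopped".
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const profile = "scheduled-stop-demo" // hypothetical profile name
	if out, err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").CombinedOutput(); err != nil {
		log.Fatalf("schedule stop: %v\n%s", err, out)
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout, not from the report
	for time.Now().Before(deadline) {
		// The status command exits non-zero once the host is stopped, so the
		// error is ignored and only the printed value is inspected.
		out, _ := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped as scheduled")
			return
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for the scheduled stop")
}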

                                                
                                    
x
+
TestInsufficientStorage (13.67s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-296572 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-296572 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.760436298s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"117cc1f9-7c49-4e87-b36e-a86e4eec3f56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-296572] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb13e39e-b437-473c-a854-9aeb5c9e07ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21139"}}
	{"specversion":"1.0","id":"0ce1c79b-473d-4d89-82a0-7faa840ac5ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bdd0e877-ef2b-465f-aa0e-f0e58ba2d4dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig"}}
	{"specversion":"1.0","id":"35ed5199-5080-4abd-b811-6db55b87253a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube"}}
	{"specversion":"1.0","id":"d68a5b79-86f1-4120-bda0-6124d2de3998","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c2a32ca9-2e4d-4efa-bed0-e3292f59914d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"372a97bb-b33c-4607-baa3-97eb741e881d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"41e1ca41-232c-470c-8f46-536099da9a76","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"567c9185-9aff-4d0c-a67f-230fbc0b29d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b699a960-52f9-4f4a-8255-0029eeacd8c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"53c13b72-08e6-4d72-8200-03827c93fa02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-296572\" primary control-plane node in \"insufficient-storage-296572\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b6af4f62-1c0d-4d23-8714-1ebf80354fc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1bd16709-bb97-45ae-a8a8-c6f4daee9e97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f017980-851b-4483-8b84-380d916cdd72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-296572 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-296572 --output=json --layout=cluster: exit status 7 (307.74583ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-296572","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-296572","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 14:24:59.733399  134339 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-296572" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-296572 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-296572 --output=json --layout=cluster: exit status 7 (385.917991ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-296572","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-296572","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 14:25:00.085199  134406 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-296572" does not appear in /home/jenkins/minikube-integration/21139-2320/kubeconfig
	E1109 14:25:00.128268  134406 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/insufficient-storage-296572/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-296572" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-296572
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-296572: (2.208679838s)
--- PASS: TestInsufficientStorage (13.67s)
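Editor's note: the status calls above use --output=json --layout=cluster, and the test keys off StatusCode 507 / StatusName "InsufficientStorage". The sketch below decodes that payload; the field names mirror the JSON shown in this report, and everything else (profile name, error handling) is an assumption.

// clusterstatus.go - sketch that decodes the `minikube status --output=json
// --layout=cluster` payload shown above and flags the 507/InsufficientStorage case.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	const profile = "insufficient-storage-demo" // hypothetical profile name
	out, err := exec.Command("minikube", "status", "-p", profile, "--output=json", "--layout=cluster").Output()
	// The command exits 7 on a degraded cluster while still printing JSON,
	// so only bail out when nothing was written to stdout.
	if len(out) == 0 && err != nil {
		log.Fatalf("minikube status: %v", err)
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode status: %v", err)
	}
	for _, n := range st.Nodes {
		fmt.Printf("node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
	}
	if st.StatusCode == 507 {
		fmt.Println("cluster is out of disk space (InsufficientStorage)")
	}
}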

                                                
                                    
x
+
TestRunningBinaryUpgrade (55.61s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2192419760 start -p running-upgrade-382260 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2192419760 start -p running-upgrade-382260 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.42310592s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-382260 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-382260 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.318140401s)
helpers_test.go:175: Cleaning up "running-upgrade-382260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-382260
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-382260: (1.995153679s)
--- PASS: TestRunningBinaryUpgrade (55.61s)

                                                
                                    
x
+
TestKubernetesUpgrade (349.97s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1109 14:27:06.456742    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:27:23.800169    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.274714254s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-334644
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-334644: (1.372204757s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-334644 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-334644 status --format={{.Host}}: exit status 7 (68.012546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m33.738094985s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-334644 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (97.060474ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-334644] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-334644
	    minikube start -p kubernetes-upgrade-334644 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3346442 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-334644 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-334644 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.152758101s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-334644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-334644
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-334644: (2.151132602s)
--- PASS: TestKubernetesUpgrade (349.97s)
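Editor's note: the downgrade attempt above is expected to fail fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) before touching the running cluster. The sketch below asserts exactly that exit code from Go; the version pair and flags come from the log, while the profile name is hypothetical.

// downgradeguard.go - sketch asserting the behaviour above: asking an existing
// v1.34.1 cluster to start at v1.28.0 should be refused with exit status 106.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	const profile = "kubernetes-upgrade-demo" // hypothetical profile name
	cmd := exec.Command("minikube", "start", "-p", profile,
		"--memory=3072", "--kubernetes-version=v1.28.0",
		"--driver=docker", "--container-runtime=crio")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		log.Fatal("downgrade unexpectedly succeeded")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 106:
		fmt.Println("downgrade correctly refused (exit status 106)")
	default:
		log.Fatalf("unexpected failure: %v", err)
	}
}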

                                                
                                    
x
+
TestMissingContainerUpgrade (108.91s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1767093548 start -p missing-upgrade-396103 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1767093548 start -p missing-upgrade-396103 --memory=3072 --driver=docker  --container-runtime=crio: (1m2.330974223s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-396103
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-396103
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-396103 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-396103 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.926093068s)
helpers_test.go:175: Cleaning up "missing-upgrade-396103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-396103
E1109 14:26:49.524336    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-396103: (1.998818253s)
--- PASS: TestMissingContainerUpgrade (108.91s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-451939 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-451939 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (93.610589ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-451939] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (50.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-451939 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-451939 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (50.016612539s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-451939 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (114.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-451939 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-451939 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m51.963655764s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-451939 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-451939 status -o json: exit status 2 (543.839794ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-451939","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-451939
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-451939: (2.248947932s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (114.76s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-451939 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-451939 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.055019511s)
--- PASS: TestNoKubernetes/serial/Start (8.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21139-2320/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-451939 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-451939 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.203927ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
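Editor's note: the check above runs systemctl inside the node through minikube ssh and treats a non-zero exit (seen as exit status 1 on the host side) as proof that kubelet is not active. Below is a hedged sketch of the same probe; only the command line is taken from the log, the profile name is hypothetical.

// kubeletprobe.go - sketch of the probe above: run systemctl inside the node via
// `minikube ssh` and interpret a non-zero exit as "kubelet is not active".
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const profile = "NoKubernetes-demo" // hypothetical profile name
	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// Non-zero exit means the kubelet unit is not active on the node.
		fmt.Println("kubelet is not running, as expected for a --no-kubernetes profile")
		return
	}
	fmt.Println("kubelet is active")
}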

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (35.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-arm64 profile list: (20.342668835s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (14.669894048s)
--- PASS: TestNoKubernetes/serial/ProfileList (35.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-451939
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-451939: (1.296414922s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-451939 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-451939 --driver=docker  --container-runtime=crio: (6.868912412s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-451939 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-451939 "sudo systemctl is-active --quiet service kubelet": exit status 1 (283.758902ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.92s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (59.53s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.647661002 start -p stopped-upgrade-471685 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.647661002 start -p stopped-upgrade-471685 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.555584667s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.647661002 -p stopped-upgrade-471685 stop
E1109 14:29:20.733538    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.647661002 -p stopped-upgrade-471685 stop: (1.232087889s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-471685 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-471685 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.743755311s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (59.53s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-471685
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-471685: (1.260647689s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.26s)

                                                
                                    
x
+
TestPause/serial/Start (84.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-342238 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-342238 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.954668942s)
--- PASS: TestPause/serial/Start (84.95s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (41.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-342238 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1109 14:32:06.457485    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-342238 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.160089196s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (5.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-241021 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-241021 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (257.08387ms)

                                                
                                                
-- stdout --
	* [false-241021] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21139
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 14:33:02.165354  172164 out.go:360] Setting OutFile to fd 1 ...
	I1109 14:33:02.165947  172164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:33:02.165982  172164 out.go:374] Setting ErrFile to fd 2...
	I1109 14:33:02.166001  172164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1109 14:33:02.166296  172164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21139-2320/.minikube/bin
	I1109 14:33:02.166795  172164 out.go:368] Setting JSON to false
	I1109 14:33:02.167741  172164 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":4533,"bootTime":1762694250,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1109 14:33:02.167839  172164 start.go:143] virtualization:  
	I1109 14:33:02.173560  172164 out.go:179] * [false-241021] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1109 14:33:02.176847  172164 out.go:179]   - MINIKUBE_LOCATION=21139
	I1109 14:33:02.176928  172164 notify.go:221] Checking for updates...
	I1109 14:33:02.183594  172164 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 14:33:02.186656  172164 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21139-2320/kubeconfig
	I1109 14:33:02.189685  172164 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21139-2320/.minikube
	I1109 14:33:02.192656  172164 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1109 14:33:02.195698  172164 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1109 14:33:02.199232  172164 config.go:182] Loaded profile config "force-systemd-flag-519664": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1109 14:33:02.199404  172164 driver.go:422] Setting default libvirt URI to qemu:///system
	I1109 14:33:02.242590  172164 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1109 14:33:02.242724  172164 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 14:33:02.335982  172164 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-09 14:33:02.324987269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1109 14:33:02.336101  172164 docker.go:319] overlay module found
	I1109 14:33:02.339124  172164 out.go:179] * Using the docker driver based on user configuration
	I1109 14:33:02.342346  172164 start.go:309] selected driver: docker
	I1109 14:33:02.342364  172164 start.go:930] validating driver "docker" against <nil>
	I1109 14:33:02.342384  172164 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 14:33:02.346130  172164 out.go:203] 
	W1109 14:33:02.349019  172164 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1109 14:33:02.352037  172164 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-241021 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-241021

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-241021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-241021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-241021" does not exist

>>> k8s: describe coredns deployment:
error: context "false-241021" does not exist

>>> k8s: describe coredns pods:
error: context "false-241021" does not exist

>>> k8s: coredns logs:
error: context "false-241021" does not exist

>>> k8s: describe api server pod(s):
error: context "false-241021" does not exist

>>> k8s: api server logs:
error: context "false-241021" does not exist

>>> host: /etc/cni:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: ip a s:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: ip r s:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: iptables-save:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: iptables table nat:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> k8s: describe kube-proxy daemon set:
error: context "false-241021" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-241021" does not exist

>>> k8s: kube-proxy logs:
error: context "false-241021" does not exist

>>> host: kubelet daemon status:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: kubelet daemon config:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> k8s: kubelet logs:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-241021

>>> host: docker daemon status:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: docker daemon config:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: /etc/docker/daemon.json:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: docker system info:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: cri-docker daemon status:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: cri-docker daemon config:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: cri-dockerd version:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: containerd daemon status:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: containerd daemon config:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: /etc/containerd/config.toml:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: containerd config dump:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: crio daemon status:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: crio daemon config:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: /etc/crio:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

>>> host: crio config:
* Profile "false-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-241021"

----------------------- debugLogs end: false-241021 [took: 4.738807039s] --------------------------------
helpers_test.go:175: Cleaning up "false-241021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-241021
--- PASS: TestNetworkPlugins/group/false (5.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (62.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.782421739s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-349599 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [623307ca-4ed7-4378-9c59-77fc8b166a0b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [623307ca-4ed7-4378-9c59-77fc8b166a0b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004392957s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-349599 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.44s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-349599 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-349599 --alsologtostderr -v=3: (12.041047617s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-349599 -n old-k8s-version-349599
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-349599 -n old-k8s-version-349599: exit status 7 (78.903951ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-349599 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (51.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-349599 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.932020062s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-349599 -n old-k8s-version-349599
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.33s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4d8hp" [393cd277-6a4b-46b3-b252-9d0f66277445] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004093659s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4d8hp" [393cd277-6a4b-46b3-b252-9d0f66277445] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003855199s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-349599 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-349599 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m31.46179496s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (83.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.994213891s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.99s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-103048 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f325ae72-af8a-416b-b2a2-8fe2e1b4d024] Pending
helpers_test.go:352: "busybox" [f325ae72-af8a-416b-b2a2-8fe2e1b4d024] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f325ae72-af8a-416b-b2a2-8fe2e1b4d024] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003536225s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-103048 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-422728 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [219a9c8a-eefa-4542-a8f6-78c4f56bea13] Pending
helpers_test.go:352: "busybox" [219a9c8a-eefa-4542-a8f6-78c4f56bea13] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [219a9c8a-eefa-4542-a8f6-78c4f56bea13] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.04706539s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-422728 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.68s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-103048 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-103048 --alsologtostderr -v=3: (12.25420298s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (11.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-422728 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-422728 --alsologtostderr -v=3: (11.954264865s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048: exit status 7 (71.234199ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-103048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-103048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.185832606s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-103048 -n default-k8s-diff-port-103048
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-422728 -n embed-certs-422728
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-422728 -n embed-certs-422728: exit status 7 (66.816818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-422728 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (53.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1109 14:39:20.734049    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-422728 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.429457965s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-422728 -n embed-certs-422728
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-swwl8" [41f9a5ae-7b92-448d-8014-b25c5eea04c2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0035968s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qdgpq" [a4624bdb-c87a-4e38-bfd4-65e1d022ae3a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003939475s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-swwl8" [41f9a5ae-7b92-448d-8014-b25c5eea04c2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00321806s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-103048 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-qdgpq" [a4624bdb-c87a-4e38-bfd4-65e1d022ae3a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005062104s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-422728 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-103048 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-422728 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (78.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m18.446363187s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.45s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1109 14:40:41.497327    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:41.504570    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:41.519513    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:41.541332    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:41.583226    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:41.665111    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:41.826982    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:42.148536    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:42.790121    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:44.071429    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:46.633058    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:40:51.754361    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:41:01.996188    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.718785821s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.72s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-192074 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-192074 --alsologtostderr -v=3: (1.529660341s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192074 -n newest-cni-192074
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192074 -n newest-cni-192074: exit status 7 (82.523962ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-192074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (16.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-192074 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.843543542s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-192074 -n newest-cni-192074
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-192074 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-545474 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cf172c15-bc73-4b57-b8a2-5a67c4f6b615] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cf172c15-bc73-4b57-b8a2-5a67c4f6b615] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004002173s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-545474 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (80.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m20.526802333s)
--- PASS: TestNetworkPlugins/group/auto/Start (80.53s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-545474 --alsologtostderr -v=3
E1109 14:42:03.442152    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:42:06.456475    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-545474 --alsologtostderr -v=3: (12.376040613s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-545474 -n no-preload-545474
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-545474 -n no-preload-545474: exit status 7 (95.457912ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-545474 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (53.66s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-545474 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.252008029s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-545474 -n no-preload-545474
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.66s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zlh4p" [1fe25f87-54de-446f-b6f2-08786b029184] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003739274s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zlh4p" [1fe25f87-54de-446f-b6f2-08786b029184] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003900652s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-545474 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-241021 "pgrep -a kubelet"
I1109 14:43:13.565482    4116 config.go:182] Loaded profile config "auto-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-241021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5kk26" [07dc77c0-2aa4-4347-aa42-8fa184d2d3b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5kk26" [07dc77c0-2aa4-4347-aa42-8fa184d2d3b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004329968s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-545474 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-241021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (84.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1109 14:43:29.525742    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.493100161s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (60.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1109 14:43:49.918130    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:49.924398    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:49.935731    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:49.957571    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:49.998964    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:50.083746    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:50.247959    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:50.570178    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:51.212109    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:52.494255    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:43:55.056242    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:00.180724    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:03.802336    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:10.423792    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:20.734470    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/functional-002359/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:44:30.905579    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.605342262s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.61s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-phxph" [b7e4e714-2c7e-4746-885e-0d758fe7e2e7] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-phxph" [b7e4e714-2c7e-4746-885e-0d758fe7e2e7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004564389s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
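
The ControllerPod steps only wait for the CNI's node-agent pods to report Running and Ready, selected by label in kube-system. Roughly the same check by hand, using the selector from the log above:

  kubectl --context calico-241021 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m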

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2dxdw" [4c68ce97-34d0-4157-a34e-b7f8f96e70df] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003772278s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-241021 "pgrep -a kubelet"
I1109 14:44:55.928078    4116 config.go:182] Loaded profile config "calico-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-241021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-n6vtv" [9bd5b364-9c14-4128-8a12-bbaa3122755a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-n6vtv" [9bd5b364-9c14-4128-8a12-bbaa3122755a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003406913s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-241021 "pgrep -a kubelet"
I1109 14:44:58.551187    4116 config.go:182] Loaded profile config "kindnet-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-241021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dftpc" [1171b17f-896a-431b-8a82-41a51abfbad2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dftpc" [1171b17f-896a-431b-8a82-41a51abfbad2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003829033s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.33s)
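
Each NetCatPod step recreates the probe deployment from testdata/netcat-deployment.yaml and then waits for the app=netcat pod to become Ready; roughly equivalent by hand:

  kubectl --context kindnet-241021 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context kindnet-241021 wait --for=condition=Ready pod -l app=netcat --timeout=15m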

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-241021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-241021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m4.235802249s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.24s)
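
Unlike the kindnet and calico runs above, this run passes --cni a path to a CNI manifest rather than a built-in plugin name, which minikube also accepts; for example:

  out/minikube-linux-arm64 start -p custom-flannel-241021 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio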

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (85.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1109 14:45:41.497864    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:09.205095    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/old-k8s-version-349599/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:33.789720    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/default-k8s-diff-port-103048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m25.828655809s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.83s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-241021 "pgrep -a kubelet"
I1109 14:46:39.669279    4116 config.go:182] Loaded profile config "custom-flannel-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-241021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d7s9l" [ee003e1c-b0ec-4744-b724-c6ab294492c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d7s9l" [ee003e1c-b0ec-4744-b724-c6ab294492c0] Running
E1109 14:46:48.608755    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:48.615171    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:48.626580    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:48.648032    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:48.689510    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:48.770977    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:46:48.932624    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004912115s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-241021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1109 14:46:49.254221    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-241021 "pgrep -a kubelet"
I1109 14:47:03.372073    4116 config.go:182] Loaded profile config "enable-default-cni-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-241021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-sk8v7" [1208caef-b76c-46db-be2a-894211e6dbe6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 14:47:06.457089    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/addons-651467/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:47:09.102154    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-sk8v7" [1208caef-b76c-46db-be2a-894211e6dbe6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.0037894s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (65.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.905491812s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.91s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-241021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (74.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1109 14:48:10.545314    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/no-preload-545474/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:13.814552    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:13.821286    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:13.833335    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:13.854582    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:13.895967    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:13.977378    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:14.138927    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:14.460398    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:15.102512    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:16.384443    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-241021 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m14.816217046s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.82s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vmjr5" [a68c2fc2-31cf-462c-b051-3006baae2f2e] Running
E1109 14:48:18.946274    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1109 14:48:24.068485    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003758926s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-241021 "pgrep -a kubelet"
I1109 14:48:25.074533    4116 config.go:182] Loaded profile config "flannel-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-241021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7qdjh" [d0250abc-348f-413b-87bb-f6e75a4c24b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7qdjh" [d0250abc-348f-413b-87bb-f6e75a4c24b8] Running
E1109 14:48:34.310023    4116 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21139-2320/.minikube/profiles/auto-241021/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003300062s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-241021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-241021 "pgrep -a kubelet"
I1109 14:48:56.354726    4116 config.go:182] Loaded profile config "bridge-241021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-241021 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nwmhf" [38cfda06-0086-4d59-95ea-c0dd8c5d914b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nwmhf" [38cfda06-0086-4d59-95ea-c0dd8c5d914b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003560332s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-241021 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-241021 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    

Test skip (31/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-143180 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-143180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-143180
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-274584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-274584
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.75s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
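
The debug dump below is expected to come up empty: the test skips before any cluster is created, so the kubenet-241021 profile and kubeconfig context never exist, which is why every kubectl probe reports "context was not found" and every minikube probe reports a missing profile. A quick way to confirm the absent context (illustrative only):

  kubectl config get-contexts kubenet-241021
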
panic.go:636: 
----------------------- debugLogs start: kubenet-241021 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-241021

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-241021

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> host: /etc/hosts:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> host: /etc/resolv.conf:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-241021

>>> host: crictl pods:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> host: crictl containers:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> k8s: describe netcat deployment:
error: context "kubenet-241021" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-241021" does not exist

>>> k8s: netcat logs:
error: context "kubenet-241021" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-241021" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-241021" does not exist

>>> k8s: coredns logs:
error: context "kubenet-241021" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-241021" does not exist

>>> k8s: api server logs:
error: context "kubenet-241021" does not exist

>>> host: /etc/cni:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> host: ip a s:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> host: ip r s:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> host: iptables-save:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

>>> host: iptables table nat:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-241021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-241021" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-241021" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-241021

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-241021"

                                                
                                                
----------------------- debugLogs end: kubenet-241021 [took: 4.579081353s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-241021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-241021
--- SKIP: TestNetworkPlugins/group/kubenet (4.75s)

x
+
TestNetworkPlugins/group/cilium (5.82s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-241021 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-241021

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-241021

>>> host: /etc/nsswitch.conf:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /etc/hosts:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /etc/resolv.conf:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-241021

>>> host: crictl pods:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: crictl containers:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> k8s: describe netcat deployment:
error: context "cilium-241021" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-241021" does not exist

>>> k8s: netcat logs:
error: context "cilium-241021" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-241021" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-241021" does not exist

>>> k8s: coredns logs:
error: context "cilium-241021" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-241021" does not exist

>>> k8s: api server logs:
error: context "cilium-241021" does not exist

>>> host: /etc/cni:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: ip a s:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: ip r s:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: iptables-save:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: iptables table nat:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-241021

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-241021

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-241021" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-241021" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-241021

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-241021

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-241021" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-241021" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-241021" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-241021" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-241021" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: kubelet daemon config:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> k8s: kubelet logs:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-241021

>>> host: docker daemon status:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: docker daemon config:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: docker system info:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: cri-docker daemon status:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: cri-docker daemon config:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: cri-dockerd version:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: containerd daemon status:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: containerd daemon config:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: containerd config dump:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: crio daemon status:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: crio daemon config:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: /etc/crio:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

>>> host: crio config:
* Profile "cilium-241021" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-241021"

----------------------- debugLogs end: cilium-241021 [took: 5.653954251s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-241021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-241021
--- SKIP: TestNetworkPlugins/group/cilium (5.82s)
